id (int64, range 0 to 190k) | prompt (string, lengths 21 to 13.4M) | docstring (string, lengths 1 to 12k, nullable) |
---|---|---|
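The rows below pair a code-context prompt with a target docstring. As a minimal sketch only, rows of this shape could be loaded with the Hugging Face `datasets` library; the repository id below is a placeholder, not the actual source.

```python
from datasets import load_dataset

# Placeholder dataset id; substitute the real repository name
dataset = load_dataset("org/code-instructions", split="train")

# Each record carries an integer id, a prompt with the code context,
# and a docstring label that may be null
row = dataset[200]
print(row["id"])
print(row["docstring"])
print(row["prompt"][:300])
```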
200 | from typing import List
from fastapi import APIRouter, Body
from .. import application
from ..route import EncodingAPIRoute
The provided code snippet includes necessary dependencies for implementing the `segment` function. Write a Python function `def segment(text: str)` to solve the following problem:
Segments text into semantic units. Args: text: input text Returns: segmented text
Here is the function:
def segment(text: str):
"""
Segments text into semantic units.
Args:
text: input text
Returns:
segmented text
"""
return application.get().pipeline("segmentation", (text,)) | Segments text into semantic units. Args: text: input text Returns: segmented text |
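The row above imports APIRouter and EncodingAPIRoute but never shows how the handler is mounted. The sketch below is a plausible wiring, assuming the router setup and route decorator were stripped from the snippet; the GET path is an assumption.

```python
from fastapi import APIRouter

from .. import application
from ..route import EncodingAPIRoute

# Router using the custom route class imported in the row above (assumed wiring)
router = APIRouter(route_class=EncodingAPIRoute)


@router.get("/segment")
def segment(text: str):
    """
    Segments text into semantic units.
    """
    return application.get().pipeline("segmentation", (text,))
```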
201 | from typing import List
from fastapi import APIRouter, Body
from .. import application
from ..route import EncodingAPIRoute
The provided code snippet includes necessary dependencies for implementing the `batchsegment` function. Write a Python function `def batchsegment(texts: List[str] = Body(...))` to solve the following problem:
Segments text into semantic units. Args: texts: list of texts to segment Returns: list of segmented text
Here is the function:
def batchsegment(texts: List[str] = Body(...)):
"""
Segments text into semantic units.
Args:
texts: list of texts to segment
Returns:
list of segmented text
"""
return application.get().pipeline("segmentation", (texts,)) | Segments text into semantic units. Args: texts: list of texts to segment Returns: list of segmented text |
202 | from typing import List
from fastapi import APIRouter, Body
from .. import application
from ..route import EncodingAPIRoute
The provided code snippet includes necessary dependencies for implementing the `entity` function. Write a Python function `def entity(text: str)` to solve the following problem:
Applies a token classifier to text. Args: text: input text Returns: list of (entity, entity type, score) per text element
Here is the function:
def entity(text: str):
"""
Applies a token classifier to text.
Args:
text: input text
Returns:
list of (entity, entity type, score) per text element
"""
return application.get().pipeline("entity", (text,)) | Applies a token classifier to text. Args: text: input text Returns: list of (entity, entity type, score) per text element |
203 | from typing import List
from fastapi import APIRouter, Body
from .. import application
from ..route import EncodingAPIRoute
The provided code snippet includes necessary dependencies for implementing the `batchentity` function. Write a Python function `def batchentity(texts: List[str] = Body(...))` to solve the following problem:
Applies a token classifier to text. Args: texts: list of text Returns: list of (entity, entity type, score) per text element
Here is the function:
def batchentity(texts: List[str] = Body(...)):
"""
Applies a token classifier to text.
Args:
texts: list of text
Returns:
list of (entity, entity type, score) per text element
"""
return application.get().pipeline("entity", (texts,)) | Applies a token classifier to text. Args: texts: list of text Returns: list of (entity, entity type, score) per text element |
204 | from typing import List
from fastapi import APIRouter, Body
from .. import application
from ..route import EncodingAPIRoute
The provided code snippet includes necessary dependencies for implementing the `similarity` function. Write a Python function `def similarity(query: str = Body(...), texts: List[str] = Body(...))` to solve the following problem:
Computes the similarity between query and list of text. Returns a list of {id: value, score: value} sorted by highest score, where id is the index in texts. Args: query: query text texts: list of text Returns: list of {id: value, score: value}
Here is the function:
def similarity(query: str = Body(...), texts: List[str] = Body(...)):
"""
Computes the similarity between query and list of text. Returns a list of
{id: value, score: value} sorted by highest score, where id is the index
in texts.
Args:
query: query text
texts: list of text
Returns:
list of {id: value, score: value}
"""
return application.get().similarity(query, texts) | Computes the similarity between query and list of text. Returns a list of {id: value, score: value} sorted by highest score, where id is the index in texts. Args: query: query text texts: list of text Returns: list of {id: value, score: value} |
205 | from typing import List
from fastapi import APIRouter, Body
from .. import application
from ..route import EncodingAPIRoute
The provided code snippet includes necessary dependencies for implementing the `batchsimilarity` function. Write a Python function `def batchsimilarity(queries: List[str] = Body(...), texts: List[str] = Body(...))` to solve the following problem:
Computes the similarity between list of queries and list of text. Returns a list of {id: value, score: value} sorted by highest score per query, where id is the index in texts. Args: queries: list of queries texts: list of text Returns: list of {id: value, score: value} per query
Here is the function:
def batchsimilarity(queries: List[str] = Body(...), texts: List[str] = Body(...)):
"""
Computes the similarity between list of queries and list of text. Returns a list
of {id: value, score: value} sorted by highest score per query, where id is the
index in texts.
Args:
queries: list of queries
texts: list of text
Returns:
list of {id: value, score: value} per query
"""
return application.get().batchsimilarity(queries, texts) | Computes the similarity between list of queries and list of text. Returns a list of {id: value, score: value} sorted by highest score per query, where id is the index in texts. Args: queries: list of queries texts: list of text Returns: list of {id: value, score: value} per query |
206 | from typing import List
from fastapi import APIRouter, Body
from .. import application
from ..route import EncodingAPIRoute
The provided code snippet includes necessary dependencies for implementing the `transcribe` function. Write a Python function `def transcribe(file: str)` to solve the following problem:
Transcribes audio files to text. Args: file: file to transcribe Returns: transcribed text
Here is the function:
def transcribe(file: str):
"""
Transcribes audio files to text.
Args:
file: file to transcribe
Returns:
transcribed text
"""
return application.get().pipeline("transcription", (file,)) | Transcribes audio files to text. Args: file: file to transcribe Returns: transcribed text |
207 | from typing import List
from fastapi import APIRouter, Body
from .. import application
from ..route import EncodingAPIRoute
The provided code snippet includes necessary dependencies for implementing the `batchtranscribe` function. Write a Python function `def batchtranscribe(files: List[str] = Body(...))` to solve the following problem:
Transcribes audio files to text. Args: files: list of files to transcribe Returns: list of transcribed text
Here is the function:
def batchtranscribe(files: List[str] = Body(...)):
"""
Transcribes audio files to text.
Args:
files: list of files to transcribe
Returns:
list of transcribed text
"""
return application.get().pipeline("transcription", (files,)) | Transcribes audio files to text. Args: files: list of files to transcribe Returns: list of transcribed text |
208 | from typing import List
from fastapi import APIRouter, Body
from .. import application
from ..route import EncodingAPIRoute
The provided code snippet includes necessary dependencies for implementing the `workflow` function. Write a Python function `def workflow(name: str = Body(...), elements: List = Body(...))` to solve the following problem:
Executes a named workflow using elements as input. Args: name: workflow name elements: list of elements to run through workflow Returns: list of processed elements
Here is the function:
def workflow(name: str = Body(...), elements: List = Body(...)):
"""
Executes a named workflow using elements as input.
Args:
name: workflow name
elements: list of elements to run through workflow
Returns:
list of processed elements
"""
return application.get().workflow(name, elements) | Executes a named workflow using elements as input. Args: name: workflow name elements: list of elements to run through workflow Returns: list of processed elements |
209 | from io import BytesIO
from typing import List
import PIL
from fastapi import APIRouter, Body, File, Form, HTTPException, Request, UploadFile
from fastapi.encoders import jsonable_encoder
from .. import application
from ..responses import ResponseFactory
from ..route import EncodingAPIRoute
from ...app import ReadOnlyError
from ...graph import Graph
The provided code snippet includes necessary dependencies for implementing the `search` function. Write a Python function `def search(query: str, request: Request)` to solve the following problem:
Finds documents most similar to the input query. This method will run either an index search or an index + database search depending on if a database is available. Args: query: input query request: FastAPI request Returns: list of {id: value, score: value} for index search, list of dict for an index + database search
Here is the function:
def search(query: str, request: Request):
"""
Finds documents most similar to the input query. This method will run either an index search
or an index + database search depending on if a database is available.
Args:
query: input query
request: FastAPI request
Returns:
list of {id: value, score: value} for index search, list of dict for an index + database search
"""
# Execute search
results = application.get().search(query, request=request)
# Encode using standard FastAPI encoder but skip certain classes
results = jsonable_encoder(
results, custom_encoder={bytes: lambda x: x, BytesIO: lambda x: x, PIL.Image.Image: lambda x: x, Graph: lambda x: x.savedict()}
)
# Return raw response to prevent duplicate encoding
response = ResponseFactory.create(request)
return response(results) | Finds documents most similar to the input query. This method will run either an index search or an index + database search depending on if a database is available. Args: query: input query request: FastAPI request Returns: list of {id: value, score: value} for index search, list of dict for an index + database search |
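As a hedged illustration only, a client could query a running instance of this handler as follows. The base URL and the /search path with a query parameter are assumptions, since the route decorator is not part of the row above.

```python
import requests

# Assumed base URL and endpoint path
response = requests.get(
    "http://localhost:8000/search",
    params={"query": "feel good story"},
    timeout=30,
)

# For an index-only search, each hit is {id: value, score: value}
for hit in response.json():
    print(hit)
```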
210 | from io import BytesIO
from typing import List
import PIL
from fastapi import APIRouter, Body, File, Form, HTTPException, Request, UploadFile
from fastapi.encoders import jsonable_encoder
from .. import application
from ..responses import ResponseFactory
from ..route import EncodingAPIRoute
from ...app import ReadOnlyError
from ...graph import Graph
The provided code snippet includes necessary dependencies for implementing the `batchsearch` function. Write a Python function `def batchsearch( request: Request, queries: List[str] = Body(...), limit: int = Body(default=None), weights: float = Body(default=None), index: str = Body(default=None), parameters: List[dict] = Body(default=None), graph: bool = Body(default=False), )` to solve the following problem:
Finds documents most similar to the input queries. This method will run either an index search or an index + database search depending on if a database is available. Args: queries: input queries limit: maximum results weights: hybrid score weights, if applicable index: index name, if applicable parameters: list of dicts of named parameters to bind to placeholders graph: return graph results if True Returns: list of {id: value, score: value} per query for index search, list of dict per query for an index + database search
Here is the function:
def batchsearch(
request: Request,
queries: List[str] = Body(...),
limit: int = Body(default=None),
weights: float = Body(default=None),
index: str = Body(default=None),
parameters: List[dict] = Body(default=None),
graph: bool = Body(default=False),
):
"""
Finds documents most similar to the input queries. This method will run either an index search
or an index + database search depending on if a database is available.
Args:
queries: input queries
limit: maximum results
weights: hybrid score weights, if applicable
index: index name, if applicable
parameters: list of dicts of named parameters to bind to placeholders
graph: return graph results if True
Returns:
list of {id: value, score: value} per query for index search, list of dict per query for an index + database search
"""
# Execute search
results = application.get().batchsearch(queries, limit, weights, index, parameters, graph)
# Encode using standard FastAPI encoder but skip certain classes
results = jsonable_encoder(
results, custom_encoder={bytes: lambda x: x, BytesIO: lambda x: x, PIL.Image.Image: lambda x: x, Graph: lambda x: x.savedict()}
)
# Return raw response to prevent duplicate encoding
response = ResponseFactory.create(request)
return response(results) | Finds documents most similar to the input queries. This method will run either an index search or an index + database search depending on if a database is available. Args: queries: input queries limit: maximum results weights: hybrid score weights, if applicable index: index name, if applicable parameters: list of dicts of named parameters to bind to placeholders graph: return graph results if True Returns: list of {id: value, score: value} per query for index search, list of dict per query for an index + database search |
211 | from io import BytesIO
from typing import List
import PIL
from fastapi import APIRouter, Body, File, Form, HTTPException, Request, UploadFile
from fastapi.encoders import jsonable_encoder
from .. import application
from ..responses import ResponseFactory
from ..route import EncodingAPIRoute
from ...app import ReadOnlyError
from ...graph import Graph
The provided code snippet includes necessary dependencies for implementing the `add` function. Write a Python function `def add(documents: List[dict] = Body(...))` to solve the following problem:
Adds a batch of documents for indexing. Args: documents: list of {id: value, text: value, tags: value}
Here is the function:
def add(documents: List[dict] = Body(...)):
"""
Adds a batch of documents for indexing.
Args:
documents: list of {id: value, text: value, tags: value}
"""
try:
application.get().add(documents)
except ReadOnlyError as e:
raise HTTPException(status_code=403, detail=e.args[0]) from e | Adds a batch of documents for indexing. Args: documents: list of {id: value, text: value, tags: value} |
212 | from io import BytesIO
from typing import List
import PIL
from fastapi import APIRouter, Body, File, Form, HTTPException, Request, UploadFile
from fastapi.encoders import jsonable_encoder
from .. import application
from ..responses import ResponseFactory
from ..route import EncodingAPIRoute
from ...app import ReadOnlyError
from ...graph import Graph
def addobject(data: List[bytes] = File(), uid: List[str] = Form(default=None), field: str = Form(default=None)):
"""
Adds a batch of binary documents for indexing.
Args:
data: list of binary objects
uid: list of corresponding ids
field: optional object field name
"""
if uid and len(data) != len(uid):
raise HTTPException(status_code=422, detail="Length of data and document lists must match")
try:
# Add objects
application.get().addobject(data, uid, field)
except ReadOnlyError as e:
raise HTTPException(status_code=403, detail=e.args[0]) from e
The provided code snippet includes necessary dependencies for implementing the `addimage` function. Write a Python function `def addimage(data: List[UploadFile] = File(), uid: List[str] = Form(), field: str = Form(default=None))` to solve the following problem:
Adds a batch of images for indexing. Args: data: list of images uid: list of corresponding ids field: optional object field name
Here is the function:
def addimage(data: List[UploadFile] = File(), uid: List[str] = Form(), field: str = Form(default=None)):
"""
Adds a batch of images for indexing.
Args:
data: list of images
uid: list of corresponding ids
field: optional object field name
"""
if uid and len(data) != len(uid):
raise HTTPException(status_code=422, detail="Length of data and uid lists must match")
try:
# Add images
application.get().addobject([PIL.Image.open(content.file) for content in data], uid, field)
except ReadOnlyError as e:
raise HTTPException(status_code=403, detail=e.args[0]) from e | Adds a batch of images for indexing. Args: data: list of images uid: list of corresponding ids field: optional object field name |
213 | from io import BytesIO
from typing import List
import PIL
from fastapi import APIRouter, Body, File, Form, HTTPException, Request, UploadFile
from fastapi.encoders import jsonable_encoder
from .. import application
from ..responses import ResponseFactory
from ..route import EncodingAPIRoute
from ...app import ReadOnlyError
from ...graph import Graph
The provided code snippet includes necessary dependencies for implementing the `index` function. Write a Python function `def index()` to solve the following problem:
Builds an embeddings index for previously batched documents.
Here is the function:
def index():
"""
Builds an embeddings index for previously batched documents.
"""
try:
application.get().index()
except ReadOnlyError as e:
raise HTTPException(status_code=403, detail=e.args[0]) from e | Builds an embeddings index for previously batched documents. |
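Rows 211 and 213 above define the add and index handlers. Below is a hedged sketch of the typical client flow, assuming the handlers are mounted at /add and /index on a local instance; the endpoint paths are not shown in the rows themselves.

```python
import requests

documents = [
    {"id": 0, "text": "US tops 5 million confirmed virus cases"},
    {"id": 1, "text": "Maine man wins $1M from $25 lottery ticket"},
]

# Batch documents for indexing (assumed /add path)
requests.post("http://localhost:8000/add", json=documents, timeout=30)

# Build the embeddings index over the batched documents (assumed /index path)
requests.get("http://localhost:8000/index", timeout=30)
```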
214 | from io import BytesIO
from typing import List
import PIL
from fastapi import APIRouter, Body, File, Form, HTTPException, Request, UploadFile
from fastapi.encoders import jsonable_encoder
from .. import application
from ..responses import ResponseFactory
from ..route import EncodingAPIRoute
from ...app import ReadOnlyError
from ...graph import Graph
The provided code snippet includes necessary dependencies for implementing the `upsert` function. Write a Python function `def upsert()` to solve the following problem:
Runs an embeddings upsert operation for previously batched documents.
Here is the function:
def upsert():
"""
Runs an embeddings upsert operation for previously batched documents.
"""
try:
application.get().upsert()
except ReadOnlyError as e:
raise HTTPException(status_code=403, detail=e.args[0]) from e | Runs an embeddings upsert operation for previously batched documents. |
215 | from io import BytesIO
from typing import List
import PIL
from fastapi import APIRouter, Body, File, Form, HTTPException, Request, UploadFile
from fastapi.encoders import jsonable_encoder
from .. import application
from ..responses import ResponseFactory
from ..route import EncodingAPIRoute
from ...app import ReadOnlyError
from ...graph import Graph
The provided code snippet includes necessary dependencies for implementing the `delete` function. Write a Python function `def delete(ids: List = Body(...))` to solve the following problem:
Deletes from an embeddings index. Returns list of ids deleted. Args: ids: list of ids to delete Returns: ids deleted
Here is the function:
def delete(ids: List = Body(...)):
"""
Deletes from an embeddings index. Returns list of ids deleted.
Args:
ids: list of ids to delete
Returns:
ids deleted
"""
try:
return application.get().delete(ids)
except ReadOnlyError as e:
raise HTTPException(status_code=403, detail=e.args[0]) from e | Deletes from an embeddings index. Returns list of ids deleted. Args: ids: list of ids to delete Returns: ids deleted |
216 | from io import BytesIO
from typing import List
import PIL
from fastapi import APIRouter, Body, File, Form, HTTPException, Request, UploadFile
from fastapi.encoders import jsonable_encoder
from .. import application
from ..responses import ResponseFactory
from ..route import EncodingAPIRoute
from ...app import ReadOnlyError
from ...graph import Graph
The provided code snippet includes necessary dependencies for implementing the `reindex` function. Write a Python function `def reindex(config: dict = Body(...), function: str = Body(default=None))` to solve the following problem:
Recreates this embeddings index using config. This method only works if document content storage is enabled. Args: config: new config function: optional function to prepare content for indexing
Here is the function:
def reindex(config: dict = Body(...), function: str = Body(default=None)):
"""
Recreates this embeddings index using config. This method only works if document content storage is enabled.
Args:
config: new config
function: optional function to prepare content for indexing
"""
try:
application.get().reindex(config, function)
except ReadOnlyError as e:
raise HTTPException(status_code=403, detail=e.args[0]) from e | Recreates this embeddings index using config. This method only works if document content storage is enabled. Args: config: new config function: optional function to prepare content for indexing |
217 | from io import BytesIO
from typing import List
import PIL
from fastapi import APIRouter, Body, File, Form, HTTPException, Request, UploadFile
from fastapi.encoders import jsonable_encoder
from .. import application
from ..responses import ResponseFactory
from ..route import EncodingAPIRoute
from ...app import ReadOnlyError
from ...graph import Graph
The provided code snippet includes necessary dependencies for implementing the `count` function. Write a Python function `def count()` to solve the following problem:
Total number of elements in this embeddings index. Returns: number of elements in the embeddings index
Here is the function:
def count():
"""
Total number of elements in this embeddings index.
Returns:
number of elements in the embeddings index
"""
return application.get().count() | Total number of elements in this embeddings index. Returns: number of elements in the embeddings index |
218 | from io import BytesIO
from typing import List
import PIL
from fastapi import APIRouter, Body, File, Form, HTTPException, Request, UploadFile
from fastapi.encoders import jsonable_encoder
from .. import application
from ..responses import ResponseFactory
from ..route import EncodingAPIRoute
from ...app import ReadOnlyError
from ...graph import Graph
The provided code snippet includes necessary dependencies for implementing the `explain` function. Write a Python function `def explain(query: str = Body(...), texts: List[str] = Body(default=None), limit: int = Body(default=None))` to solve the following problem:
Explains the importance of each input token in text for a query. Args: query: query text texts: list of text limit: maximum results Returns: list of dict where a higher score represents higher importance relative to the query
Here is the function:
def explain(query: str = Body(...), texts: List[str] = Body(default=None), limit: int = Body(default=None)):
"""
Explains the importance of each input token in text for a query.
Args:
query: query text
texts: list of text
limit: maximum results
Returns:
list of dict where a higher score represents higher importance relative to the query
"""
return application.get().explain(query, texts, limit) | Explains the importance of each input token in text for a query. Args: query: query text texts: list of text limit: maximum results Returns: list of dict where a higher score represents higher importance relative to the query |
219 | from io import BytesIO
from typing import List
import PIL
from fastapi import APIRouter, Body, File, Form, HTTPException, Request, UploadFile
from fastapi.encoders import jsonable_encoder
from .. import application
from ..responses import ResponseFactory
from ..route import EncodingAPIRoute
from ...app import ReadOnlyError
from ...graph import Graph
The provided code snippet includes necessary dependencies for implementing the `batchexplain` function. Write a Python function `def batchexplain(queries: List[str] = Body(...), texts: List[str] = Body(default=None), limit: int = Body(default=None))` to solve the following problem:
Explains the importance of each input token in text for a list of queries. Args: queries: list of queries texts: list of text limit: maximum results Returns: list of dict per query where a higher score represents higher importance relative to the query
Here is the function:
def batchexplain(queries: List[str] = Body(...), texts: List[str] = Body(default=None), limit: int = Body(default=None)):
"""
Explains the importance of each input token in text for a list of queries.
Args:
queries: list of queries
texts: list of text
limit: maximum results
Returns:
list of dict per query where a higher score represents higher importance relative to the query
"""
return application.get().batchexplain(queries, texts, limit) | Explains the importance of each input token in text for a list of queries. Args: queries: list of queries texts: list of text limit: maximum results Returns: list of dict per query where a higher score represents higher importance relative to the query |
220 | from io import BytesIO
from typing import List
import PIL
from fastapi import APIRouter, Body, File, Form, HTTPException, Request, UploadFile
from fastapi.encoders import jsonable_encoder
from .. import application
from ..responses import ResponseFactory
from ..route import EncodingAPIRoute
from ...app import ReadOnlyError
from ...graph import Graph
The provided code snippet includes necessary dependencies for implementing the `transform` function. Write a Python function `def transform(text: str)` to solve the following problem:
Transforms text into an embeddings array. Args: text: input text Returns: embeddings array
Here is the function:
def transform(text: str):
"""
Transforms text into an embeddings array.
Args:
text: input text
Returns:
embeddings array
"""
return application.get().transform(text) | Transforms text into an embeddings array. Args: text: input text Returns: embeddings array |
221 | from io import BytesIO
from typing import List
import PIL
from fastapi import APIRouter, Body, File, Form, HTTPException, Request, UploadFile
from fastapi.encoders import jsonable_encoder
from .. import application
from ..responses import ResponseFactory
from ..route import EncodingAPIRoute
from ...app import ReadOnlyError
from ...graph import Graph
The provided code snippet includes necessary dependencies for implementing the `batchtransform` function. Write a Python function `def batchtransform(texts: List[str] = Body(...))` to solve the following problem:
Transforms list of text into embeddings arrays. Args: texts: list of text Returns: embeddings arrays
Here is the function:
def batchtransform(texts: List[str] = Body(...)):
"""
Transforms list of text into embeddings arrays.
Args:
texts: list of text
Returns:
embeddings arrays
"""
return application.get().batchtransform(texts) | Transforms list of text into embeddings arrays. Args: texts: list of text Returns: embeddings arrays |
222 | from typing import List
from fastapi import APIRouter, Body
from .. import application
from ..route import EncodingAPIRoute
The provided code snippet includes necessary dependencies for implementing the `caption` function. Write a Python function `def caption(file: str)` to solve the following problem:
Builds captions for images. Args: file: file to process Returns: list of captions
Here is the function:
def caption(file: str):
"""
Builds captions for images.
Args:
file: file to process
Returns:
list of captions
"""
return application.get().pipeline("caption", (file,)) | Builds captions for images. Args: file: file to process Returns: list of captions |
223 | from typing import List
from fastapi import APIRouter, Body
from .. import application
from ..route import EncodingAPIRoute
The provided code snippet includes necessary dependencies for implementing the `batchcaption` function. Write a Python function `def batchcaption(files: List[str] = Body(...))` to solve the following problem:
Builds captions for images. Args: files: list of files to process Returns: list of captions
Here is the function:
def batchcaption(files: List[str] = Body(...)):
"""
Builds captions for images.
Args:
files: list of files to process
Returns:
list of captions
"""
return application.get().pipeline("caption", (files,)) | Builds captions for images. Args: files: list of files to process Returns: list of captions |
224 | from typing import List
from fastapi import APIRouter, Body
from .. import application
from ..route import EncodingAPIRoute
The provided code snippet includes necessary dependencies for implementing the `objects` function. Write a Python function `def objects(file: str)` to solve the following problem:
Applies object detection/image classification models to images. Args: file: file to process Returns: list of (label, score) elements
Here is the function:
def objects(file: str):
"""
Applies object detection/image classification models to images.
Args:
file: file to process
Returns:
list of (label, score) elements
"""
return application.get().pipeline("objects", (file,)) | Applies object detection/image classification models to images. Args: file: file to process Returns: list of (label, score) elements |
225 | from typing import List
from fastapi import APIRouter, Body
from .. import application
from ..route import EncodingAPIRoute
The provided code snippet includes necessary dependencies for implementing the `batchobjects` function. Write a Python function `def batchobjects(files: List[str] = Body(...))` to solve the following problem:
Applies object detection/image classification models to images. Args: files: list of files to process Returns: list of (label, score) elements
Here is the function:
def batchobjects(files: List[str] = Body(...)):
"""
Applies object detection/image classification models to images.
Args:
files: list of files to process
Returns:
list of (label, score) elements
"""
return application.get().pipeline("objects", (files,)) | Applies object detection/image classification models to images. Args: files: list of files to process Returns: list of (label, score) elements |
226 | from typing import List
from fastapi import APIRouter, Body
from .. import application
from ..route import EncodingAPIRoute
The provided code snippet includes necessary dependencies for implementing the `textract` function. Write a Python function `def textract(file: str)` to solve the following problem:
Extracts text from a file at path. Args: file: file to extract text Returns: extracted text
Here is the function:
def textract(file: str):
"""
Extracts text from a file at path.
Args:
file: file to extract text
Returns:
extracted text
"""
return application.get().pipeline("textractor", (file,)) | Extracts text from a file at path. Args: file: file to extract text Returns: extracted text |
227 | from typing import List
from fastapi import APIRouter, Body
from .. import application
from ..route import EncodingAPIRoute
The provided code snippet includes necessary dependencies for implementing the `batchtextract` function. Write a Python function `def batchtextract(files: List[str] = Body(...))` to solve the following problem:
Extracts text from a file at path. Args: files: list of files to extract text Returns: list of extracted text
Here is the function:
def batchtextract(files: List[str] = Body(...)):
"""
Extracts text from a file at path.
Args:
files: list of files to extract text
Returns:
list of extracted text
"""
return application.get().pipeline("textractor", (files,)) | Extracts text from a file at path. Args: files: list of files to extract text Returns: list of extracted text |
228 | import inspect
import os
import sys
from fastapi import APIRouter, Depends, FastAPI
from .authorization import Authorization
from .base import API
from .factory import APIFactory
from ..app import Application
def lifespan(application):
"""
FastAPI lifespan event handler.
Args:
application: FastAPI application to initialize
"""
# pylint: disable=W0603
global INSTANCE
# Load YAML settings
config = Application.read(os.environ.get("CONFIG"))
# Instantiate API instance
api = os.environ.get("API_CLASS")
INSTANCE = APIFactory.create(config, api) if api else API(config)
# Get all known routers
routers = apirouters()
# Conditionally add routes based on configuration
for name, router in routers.items():
if name in config:
application.include_router(router)
# Special case for embeddings clusters
if "cluster" in config and "embeddings" not in config:
application.include_router(routers["embeddings"])
# Special case to add similarity instance for embeddings
if "embeddings" in config and "similarity" not in config:
application.include_router(routers["similarity"])
# Execute extensions if present
extensions = os.environ.get("EXTENSIONS")
if extensions:
for extension in extensions.split(","):
# Create instance and execute extension
extension = APIFactory.get(extension.strip())()
extension(application)
yield
app, INSTANCE = create(), None
The provided code snippet includes necessary dependencies for implementing the `start` function. Write a Python function `def start()` to solve the following problem:
Runs application lifespan handler.
Here is the function:
def start():
"""
Runs application lifespan handler.
"""
list(lifespan(app)) | Runs application lifespan handler. |
229 | import logging
import os
import pickle
import tempfile
from errno import ENOENT
from multiprocessing import Pool
import numpy as np
from ..pipeline import Tokenizer
from ..version import __pickle__
from .base import Vectors
# Optional word vector dependencies used below: fasttext for training, pymagnitude for vector storage
import fasttext
from pymagnitude import Magnitude, converter
logger = logging.getLogger(__name__)
VECTORS = None
class WordVectors(Vectors):
"""
Builds sentence embeddings/vectors using weighted word embeddings.
"""
def loadmodel(self, path):
# Ensure that vector path exists
if not path or not os.path.isfile(path):
raise IOError(ENOENT, "Vector model file not found", path)
# Load magnitude model. If this is a training run (uninitialized config), block until vectors are fully loaded
return Magnitude(path, case_insensitive=True, blocking=not self.initialized)
def encode(self, data):
# Iterate over each data element, tokenize (if necessary) and build an aggregated embeddings vector
embeddings = []
for tokens in data:
# Convert to tokens if necessary
if isinstance(tokens, str):
tokens = Tokenizer.tokenize(tokens)
# Generate weights for each vector using a scoring method
weights = self.scoring.weights(tokens) if self.scoring else None
# pylint: disable=E1133
if weights and [x for x in weights if x > 0]:
# Build weighted average embeddings vector. Create weights array as float32 to match embeddings precision.
embedding = np.average(self.lookup(tokens), weights=np.array(weights, dtype=np.float32), axis=0)
else:
# If no weights, use mean
embedding = np.mean(self.lookup(tokens), axis=0)
embeddings.append(embedding)
return np.array(embeddings, dtype=np.float32)
def index(self, documents, batchsize=1):
# Use default single process indexing logic
if "parallel" in self.config and not self.config["parallel"]:
return super().index(documents, batchsize)
# Customize indexing logic with multiprocessing pool to efficiently build vectors
ids, dimensions, batches, stream = [], None, 0, None
# Shared objects with Pool
args = (self.config, self.scoring)
# Convert all documents to embedding arrays, stream embeddings to disk to control memory usage
with Pool(os.cpu_count(), initializer=create, initargs=args) as pool:
with tempfile.NamedTemporaryFile(mode="wb", suffix=".npy", delete=False) as output:
stream = output.name
embeddings = []
for uid, embedding in pool.imap(transform, documents):
if not dimensions:
# Set number of dimensions for embeddings
dimensions = embedding.shape[0]
ids.append(uid)
embeddings.append(embedding)
if len(embeddings) == batchsize:
pickle.dump(np.array(embeddings, dtype=np.float32), output, protocol=__pickle__)
batches += 1
embeddings = []
# Final embeddings batch
if embeddings:
pickle.dump(np.array(embeddings, dtype=np.float32), output, protocol=__pickle__)
batches += 1
return (ids, dimensions, batches, stream)
def lookup(self, tokens):
"""
Queries word vectors for given list of input tokens.
Args:
tokens: list of tokens to query
Returns:
word vectors array
"""
return self.model.query(tokens)
def isdatabase(path):
"""
Checks if this is a SQLite database file which is the file format used for word vectors databases.
Args:
path: path to check
Returns:
True if this is a SQLite database
"""
if isinstance(path, str) and os.path.isfile(path) and os.path.getsize(path) >= 100:
# Read 100 byte SQLite header
with open(path, "rb") as f:
header = f.read(100)
# Check for SQLite header
return header.startswith(b"SQLite format 3\000")
return False
def build(data, size, mincount, path):
"""
Builds fastText vectors from a file.
Args:
data: path to input data file
size: number of vector dimensions
mincount: minimum number of occurrences required to register a token
path: path to output file
"""
# Train on data file using largest dimension size
model = fasttext.train_unsupervised(data, dim=size, minCount=mincount)
# Output file path
logger.info("Building %d dimension model", size)
# Output vectors in vec/txt format
with open(path + ".txt", "w", encoding="utf-8") as output:
words = model.get_words()
output.write(f"{len(words)} {model.get_dimension()}\n")
for word in words:
# Skip end of line token
if word != "</s>":
vector = model.get_word_vector(word)
data = ""
for v in vector:
data += " " + str(v)
output.write(word + data + "\n")
# Build magnitude vectors database
logger.info("Converting vectors to magnitude format")
converter.convert(path + ".txt", path + ".magnitude", subword=True)
The provided code snippet includes necessary dependencies for implementing the `create` function. Write a Python function `def create(config, scoring)` to solve the following problem:
Multiprocessing helper method. Creates a global embeddings object to be accessed in a new subprocess. Args: config: vector configuration scoring: scoring instance
Here is the function:
def create(config, scoring):
"""
Multiprocessing helper method. Creates a global embeddings object to be accessed in a new subprocess.
Args:
config: vector configuration
scoring: scoring instance
"""
global VECTORS
# Create a global embedding object using configuration and saved
VECTORS = WordVectors(config, scoring, None) | Multiprocessing helper method. Creates a global embeddings object to be accessed in a new subprocess. Args: config: vector configuration scoring: scoring instance |
230 | import logging
import os
import pickle
import tempfile
from errno import ENOENT
from multiprocessing import Pool
import numpy as np
from ..pipeline import Tokenizer
from ..version import __pickle__
from .base import Vectors
VECTORS = None
The provided code snippet includes necessary dependencies for implementing the `transform` function. Write a Python function `def transform(document)` to solve the following problem:
Multiprocessing helper method. Transforms document into an embeddings vector. Args: document: (id, data, tags) Returns: (id, embedding)
Here is the function:
def transform(document):
"""
Multiprocessing helper method. Transforms document into an embeddings vector.
Args:
document: (id, data, tags)
Returns:
(id, embedding)
"""
return (document[0], VECTORS.transform(document)) | Multiprocessing helper method. Transforms document into an embeddings vector. Args: document: (id, data, tags) Returns: (id, embedding) |
231 | from sqlalchemy import String, Text
The provided code snippet includes necessary dependencies for implementing the `idcolumn` function. Write a Python function `def idcolumn()` to solve the following problem:
Creates an id column. This method creates an unbounded text field for platforms that support it. Returns: id column definition
Here is the function:
def idcolumn():
"""
Creates an id column. This method creates an unbounded text field for platforms that support it.
Returns:
id column definition
"""
return String(512).with_variant(Text(), "sqlite", "postgresql") | Creates an id column. This method creates an unbounded text field for platforms that support it. Returns: id column definition |
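A short usage sketch for idcolumn inside a SQLAlchemy table definition; the table and column names here are illustrative, not taken from the source.

```python
from sqlalchemy import Column, Integer, MetaData, Table

# Illustrative table definition using the unbounded id column
sections = Table(
    "sections",
    MetaData(),
    Column("indexid", Integer, primary_key=True, autoincrement=True),
    Column("id", idcolumn(), index=True),
)
```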
232 | import os
import urllib.parse
import requests
import streamlit as st
from txtai.pipeline import Summary
class Application:
"""
Main application.
"""
SEARCH_TEMPLATE = "https://en.wikipedia.org/w/api.php?action=opensearch&search=%s&limit=1&namespace=0&format=json"
CONTENT_TEMPLATE = "https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro&explaintext&redirects=1&titles=%s"
def __init__(self):
"""
Creates a new application.
"""
self.summary = Summary("sshleifer/distilbart-cnn-12-6")
def run(self):
"""
Runs a Streamlit application.
"""
st.title("Wikipedia")
st.markdown("This application queries the Wikipedia API and summarizes the top result.")
query = st.text_input("Query")
if query:
query = urllib.parse.quote_plus(query)
data = requests.get(Application.SEARCH_TEMPLATE % query).json()
if data and data[1]:
page = urllib.parse.quote_plus(data[1][0])
content = requests.get(Application.CONTENT_TEMPLATE % page).json()
content = list(content["query"]["pages"].values())[0]["extract"]
st.write(self.summary(content))
st.markdown("*Source: " + data[3][0] + "*")
else:
st.markdown("*No results found*")
The provided code snippet includes necessary dependencies for implementing the `create` function. Write a Python function `def create()` to solve the following problem:
Creates and caches a Streamlit application. Returns: Application
Here is the function:
def create():
"""
Creates and caches a Streamlit application.
Returns:
Application
"""
return Application() | Creates and caches a Streamlit application. Returns: Application |
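The docstring above says the application is cached, yet no caching appears in the row. Presumably a Streamlit caching decorator is applied where create is defined; one possible sketch is shown below (older examples used st.cache rather than st.cache_resource), together with the usual entry point that runs the cached instance.

```python
import streamlit as st


@st.cache_resource
def create():
    """
    Creates and caches a Streamlit application.
    """
    return Application()


if __name__ == "__main__":
    # Create (cached) application instance and run it
    create().run()
```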
233 | import argparse
import csv
import json
import os
import pickle
import sqlite3
import time
import psutil
import yaml
import numpy as np
from rank_bm25 import BM25Okapi
from pytrec_eval import RelevanceEvaluator
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk
from txtai.embeddings import Embeddings
from txtai.pipeline import Extractor, LLM, Tokenizer
from txtai.scoring import ScoringFactory
def evaluate(methods, path, args):
"""
Runs an evaluation.
Args:
methods: list of indexing methods to test
path: path to dataset
args: command line arguments
Returns:
{calculated performance metrics}
"""
print(f"------ {os.path.basename(path)} ------")
# Performance stats
performance = {}
# Calculate stats for each model type
topk = args.topk
evaluator = RelevanceEvaluator(relevance(path), {f"ndcg_cut.{topk}", f"map_cut.{topk}", f"recall.{topk}", f"P.{topk}"})
for method in methods:
# Stats for this source
stats = {}
performance[method] = stats
# Create index and get results
start = time.time()
output = args.output if args.output else f"{path}/{method}"
index = create(method, path, args.config, output, args.refresh)
# Add indexing metrics
stats["index"] = round(time.time() - start, 2)
stats["memory"] = int(psutil.Process().memory_info().rss / (1024 * 1024))
stats["disk"] = int(sum(d.stat().st_size for d in os.scandir(output) if d.is_file()) / 1024) if os.path.isdir(output) else 0
print("INDEX TIME =", time.time() - start)
print(f"MEMORY USAGE = {stats['memory']} MB")
print(f"DISK USAGE = {stats['disk']} KB")
start = time.time()
results = index(topk)
# Add search metrics
stats["search"] = round(time.time() - start, 2)
print("SEARCH TIME =", time.time() - start)
# Calculate stats
metrics = compute(evaluator.evaluate(results))
# Add accuracy metrics
for stat in [f"ndcg_cut_{topk}", f"map_cut_{topk}", f"recall_{topk}", f"P_{topk}"]:
stats[stat] = metrics[stat]
# Print model stats
print(f"------ {method} ------")
print(f"NDCG@{topk} =", metrics[f"ndcg_cut_{topk}"])
print(f"MAP@{topk} =", metrics[f"map_cut_{topk}"])
print(f"Recall@{topk} =", metrics[f"recall_{topk}"])
print(f"P@{topk} =", metrics[f"P_{topk}"])
print()
return performance
The provided code snippet includes necessary dependencies for implementing the `benchmarks` function. Write a Python function `def benchmarks(args)` to solve the following problem:
Main benchmark execution method. Args: args: command line arguments
Here is the function:
def benchmarks(args):
"""
Main benchmark execution method.
Args:
args: command line arguments
"""
# Directory where BEIR datasets are stored
directory = args.directory if args.directory else "beir"
if args.sources and args.methods:
sources, methods = args.sources.split(","), args.methods.split(",")
mode = "a"
else:
# Default sources and methods
sources = [
"trec-covid",
"nfcorpus",
"nq",
"hotpotqa",
"fiqa",
"arguana",
"webis-touche2020",
"quora",
"dbpedia-entity",
"scidocs",
"fever",
"climate-fever",
"scifact",
]
methods = ["bm25", "embed", "es", "hybrid", "rank", "sqlite"]
mode = "w"
# Run and save benchmarks
with open("benchmarks.json", mode, encoding="utf-8") as f:
for source in sources:
# Run evaluations
results = evaluate(methods, f"{directory}/{source}", args)
# Save as JSON lines output
for method, stats in results.items():
stats["source"] = source
stats["method"] = method
stats["name"] = args.name if args.name else method
json.dump(stats, f)
f.write("\n") | Main benchmark execution method. Args: args: command line arguments |
234 | import glob
import os
import sys
import streamlit as st
from PIL import Image
from txtai.embeddings import Embeddings
class Application:
"""
Main application
"""
def __init__(self, directory):
"""
Creates a new application.
Args:
directory: directory of images
"""
self.embeddings = self.build(directory)
def build(self, directory):
"""
Builds an image embeddings index.
Args:
directory: directory with images
Returns:
Embeddings index
"""
embeddings = Embeddings({"method": "sentence-transformers", "path": "clip-ViT-B-32"})
embeddings.index(self.images(directory))
# Update model to support multilingual queries
embeddings.config["path"] = "sentence-transformers/clip-ViT-B-32-multilingual-v1"
embeddings.model = embeddings.loadvectors()
return embeddings
def images(self, directory):
"""
Generator that loops over each image in a directory.
Args:
directory: directory with images
"""
for path in glob.glob(directory + "/*jpg") + glob.glob(directory + "/*png"):
yield (path, Image.open(path), None)
def run(self):
"""
Runs a Streamlit application.
"""
st.title("Image search")
st.markdown("This application shows how images and text can be embedded into the same space to support similarity search. ")
st.markdown(
"[sentence-transformers](https://github.com/UKPLab/sentence-transformers/tree/master/examples/applications/image-search) "
+ "recently added support for the [OpenAI CLIP model](https://github.com/openai/CLIP). This model embeds text and images into "
+ "the same space, enabling image similarity search. txtai can directly utilize these models."
)
query = st.text_input("Search query:")
if query:
index, _ = self.embeddings.search(query, 1)[0]
st.image(Image.open(index))
The provided code snippet includes necessary dependencies for implementing the `create` function. Write a Python function `def create(directory)` to solve the following problem:
Creates and caches a Streamlit application. Args: directory: directory of images to index Returns: Application
Here is the function:
def create(directory):
"""
Creates and caches a Streamlit application.
Args:
directory: directory of images to index
Returns:
Application
"""
return Application(directory) | Creates and caches a Streamlit application. Args: directory: directory of images to index Returns: Application |
235 | import os
import streamlit as st
from txtai.embeddings import Embeddings
class Application:
"""
Main application.
"""
def __init__(self):
"""
Creates a new application.
"""
# Create embeddings model, backed by sentence-transformers & transformers
self.embeddings = Embeddings({"path": "sentence-transformers/nli-mpnet-base-v2"})
def run(self):
"""
Runs a Streamlit application.
"""
st.title("Similarity Search")
st.markdown("This application runs a basic similarity search that identifies the best matching row for a query.")
data = [
"US tops 5 million confirmed virus cases",
"Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg",
"Beijing mobilises invasion craft along coast as Taiwan tensions escalate",
"The National Park Service warns against sacrificing slower friends in a bear attack",
"Maine man wins $1M from $25 lottery ticket",
"Make huge profits without work, earn up to $100,000 a day",
]
data = st.text_area("Data", value="\n".join(data))
query = st.text_input("Query")
data = data.split("\n")
if query:
# Get index of best section that best matches query
uid = self.embeddings.similarity(query, data)[0][0]
st.write(data[uid])
The provided code snippet includes necessary dependencies for implementing the `create` function. Write a Python function `def create()` to solve the following problem:
Creates and caches a Streamlit application. Returns: Application
Here is the function:
def create():
"""
Creates and caches a Streamlit application.
Returns:
Application
"""
return Application() | Creates and caches a Streamlit application. Returns: Application |
236 | import datetime
import math
import os
import random
import altair as alt
import numpy as np
import pandas as pd
import streamlit as st
from txtai.embeddings import Embeddings
class Application:
"""
Main application.
"""
def __init__(self):
"""
Creates a new application.
"""
# Batting stats
self.batting = Batting()
# Pitching stats
self.pitching = Pitching()
def run(self):
"""
Runs a Streamlit application.
"""
st.title("⚾ Baseball Statistics")
st.markdown(
"""
This application finds the best matching historical players using vector search with [txtai](https://github.com/neuml/txtai).
Raw data is from the [Baseball Databank](https://github.com/chadwickbureau/baseballdatabank) GitHub project. Read [this
article](https://medium.com/neuml/explore-baseball-history-with-vector-search-5778d98d6846) for more details.
"""
)
player, search = st.tabs(["Player", "Search"])
# Player tab
with player:
self.player()
# Search
with search:
self.search()
def player(self):
"""
Player tab.
"""
st.markdown("Match by player-season. Each player search defaults to the best season sorted by OPS or Wins Adjusted.")
# Get parameters
params = self.params()
# Category and stats
category = self.category(params.get("category"), "category")
stats = self.batting if category == "Batting" else self.pitching
# Player name
name = self.name(stats.names, params.get("name"))
# Player metrics
active, best, metrics = stats.metrics(name)
# Player year
year = self.year(active, params.get("year"), best)
# Display metrics chart
if len(active) > 1:
self.chart(category, metrics)
# Run search
results = stats.search(name, year)
# Display results
self.table(results, ["link", "nameFirst", "nameLast", "teamID"] + stats.columns[1:])
# Save parameters
st.experimental_set_query_params(category=category, name=name, year=year)
def search(self):
"""
Stats search tab.
"""
st.markdown("Find players with similar statistics.")
category = self.category("Batting", "searchcategory")
with st.form("search"):
if category == "Batting":
stats, columns = self.batting, self.batting.columns[:-6]
elif category == "Pitching":
stats, columns = self.pitching, self.pitching.columns[:-2]
# Enter stats with data editor
inputs = st.data_editor(pd.DataFrame([dict((column, None) for column in columns)]), hide_index=True).astype(float)
submitted = st.form_submit_button("Search")
if submitted:
# Run search
results = stats.search(row=inputs.to_dict(orient="records")[0])
# Display table
self.table(results, ["link", "nameFirst", "nameLast", "teamID"] + stats.columns[1:])
def params(self):
"""
Get application parameters. This method combines URL parameters with session parameters.
Returns:
parameters
"""
# Get parameters
params = st.experimental_get_query_params()
params = {x: params[x][0] for x in params}
# Sync parameters with session state
if all(x in st.session_state for x in ["category", "name", "year"]):
# Copy session year if category and name are unchanged
params["year"] = str(st.session_state["year"]) if all(params.get(x) == st.session_state[x] for x in ["category", "name"]) else None
# Copy category and name from session state
params["category"] = st.session_state["category"]
params["name"] = st.session_state["name"]
return params
def category(self, category, key):
"""
Builds category input widget.
Args:
category: category parameter
key: widget key
Returns:
category component
"""
# List of stat categories
categories = ["Batting", "Pitching"]
# Get category parameter, default if not available or valid
default = categories.index(category) if category and category in categories else 0
# Radio box component
return st.radio("Stat", categories, index=default, horizontal=True, key=key)
def name(self, names, name):
"""
Builds name input widget.
Args:
names: list of all allowable names
Returns:
name component
"""
# Get name parameter, default to random weighted value if not valid
name = name if name and name in names else random.choices(list(names.keys()), weights=[names[x][1] for x in names])[0]
# Sort names for display
names = sorted(names)
# Select box component
return st.selectbox("Name", names, names.index(name), key="name")
def year(self, years, year, best):
"""
Builds year input widget.
Args:
years: active years for a player
year: year parameter
best: default to best year if year is invalid
Returns:
year component
"""
# Get year parameter, default if not available or valid
year = int(year) if year and year.isdigit() and int(year) in years else best
# Slider component
return int(st.select_slider("Year", years, year, key="year") if len(years) > 1 else years[0])
def chart(self, category, metrics):
"""
Displays a metric chart.
Args:
category: Batting or Pitching
metrics: player metrics to plot
"""
# Key metric
metric = self.batting.metric() if category == "Batting" else self.pitching.metric()
# Cast year to string
metrics["yearID"] = metrics["yearID"].astype(str)
# Metric over years
chart = (
alt.Chart(metrics)
.mark_line(interpolate="monotone", point=True, strokeWidth=2.5, opacity=0.75)
.encode(x=alt.X("yearID", title=""), y=alt.Y(metric, scale=alt.Scale(zero=False)))
)
# Create metric median rule line
rule = alt.Chart(metrics).mark_rule(color="gray", strokeDash=[3, 5], opacity=0.5).encode(y=f"median({metric})")
# Layered chart configuration
chart = (chart + rule).encode(y=alt.Y(title=metric)).properties(height=200).configure_axis(grid=False)
# Draw chart
st.altair_chart(chart + rule, theme="streamlit", use_container_width=True)
def table(self, results, columns):
"""
Displays a list of results as a table.
Args:
results: list of results
columns: column names
"""
if results:
st.dataframe(
results,
column_order=columns,
column_config={
"link": st.column_config.LinkColumn("Link", width="small"),
"yearID": st.column_config.NumberColumn("Year", format="%d"),
"nameFirst": "First",
"nameLast": "Last",
"teamID": "Team",
"age": "Age",
"weight": "Weight",
"height": "Height",
},
)
else:
st.write("Player-Year not found")
The provided code snippet includes necessary dependencies for implementing the `create` function. Write a Python function `def create()` to solve the following problem:
Creates and caches a Streamlit application. Returns: Application
Here is the function:
def create():
"""
Creates and caches a Streamlit application.
Returns:
Application
"""
return Application() | Creates and caches a Streamlit application. Returns: Application |
237 | import os
import streamlit as st
from txtai.pipeline import Summary, Textractor
from txtai.workflow import UrlTask, Task, Workflow
class Application:
"""
Main application.
"""
def __init__(self):
"""
Creates a new application.
"""
textract = Textractor(paragraphs=True, minlength=100, join=True)
summary = Summary("sshleifer/distilbart-cnn-12-6")
self.workflow = Workflow([UrlTask(textract), Task(summary)])
def run(self):
"""
Runs a Streamlit application.
"""
st.title("Article Summary")
st.markdown("This application builds a summary of an article.")
url = st.text_input("URL")
if url:
# Run workflow and get summary
summary = list(self.workflow([url]))[0]
# Write results
st.write(summary)
st.markdown("*Source: " + url + "*")
The provided code snippet includes necessary dependencies for implementing the `create` function. Write a Python function `def create()` to solve the following problem:
Creates and caches a Streamlit application. Returns: Application
Here is the function:
def create():
"""
Creates and caches a Streamlit application.
Returns:
Application
"""
return Application() | Creates and caches a Streamlit application. Returns: Application |
238 | import json
from txtai.api import API
APP = None
The provided code snippet includes necessary dependencies for implementing the `handler` function. Write a Python function `def handler(event, context)` to solve the following problem:
Runs a workflow using input event parameters. Args: event: input event context: input context Returns: Workflow results
Here is the function:
def handler(event, context):
"""
Runs a workflow using input event parameters.
Args:
event: input event
context: input context
Returns:
Workflow results
"""
# Create (or get) global app instance
global APP
APP = APP if APP else API("config.yml")
# Get parameters from event body
event = json.loads(event["body"])
# Run workflow and return results
return {"statusCode": 200, "headers": {"Content-Type": "application/json"}, "body": list(APP.workflow(event["name"], event["elements"]))} | Runs a workflow using input event parameters. Args: event: input event context: input context Returns: Workflow results |
239 | import os
import sys
import fire
import numpy as np
import tqdm
from collections import Counter, defaultdict
from concurrent.futures import ThreadPoolExecutor, as_completed
from typing import List
from human_eval.data import HUMAN_EVAL, read_problems, stream_jsonl, write_jsonl
from human_eval.evaluation import evaluate_functional_correctness, estimate_pass_at_k
from human_eval.execution import check_correctness
ROOT = os.path.dirname(os.path.abspath(__file__))
HUMAN_EVAL = os.path.join(ROOT, "..", "data", "HumanEval.jsonl.gz")
def evaluate_functional_correctness(
sample_file: str,
k: List[int] = [1, 10, 100],
n_workers: int = 4,
timeout: float = 3.0,
problem_file: str = HUMAN_EVAL,
):
"""
Evaluates the functional correctness of generated samples, and writes
results to f"{sample_file}_results.jsonl.gz"
"""
problems = read_problems(problem_file)
# Check the generated samples against test suites.
with ThreadPoolExecutor(max_workers=n_workers) as executor:
futures = []
completion_id = Counter()
n_samples = 0
results = defaultdict(list)
print("Reading samples...")
for sample in tqdm.tqdm(stream_jsonl(sample_file)):
task_id = sample["task_id"]
completion = sample["completion"]
args = (problems[task_id], completion, timeout, completion_id[task_id])
future = executor.submit(check_correctness, *args)
futures.append(future)
completion_id[task_id] += 1
n_samples += 1
assert len(completion_id) == len(problems), "Some problems are not attempted."
print("Running test suites...")
for future in tqdm.tqdm(as_completed(futures), total=len(futures)):
result = future.result()
results[result["task_id"]].append((result["completion_id"], result))
# Calculate pass@k.
total, correct = [], []
for result in results.values():
result.sort()
passed = [r[1]["passed"] for r in result]
total.append(len(passed))
correct.append(sum(passed))
total = np.array(total)
correct = np.array(correct)
ks = k
pass_at_k = {f"pass@{k}": estimate_pass_at_k(total, correct, k).mean()
for k in ks if (total >= k).all()}
# Finally, save the results in one file:
def combine_results():
for sample in stream_jsonl(sample_file):
task_id = sample["task_id"]
result = results[task_id].pop(0)
sample["result"] = result[1]["result"]
sample["passed"] = result[1]["passed"]
yield sample
out_file = sample_file + "_results.jsonl"
print(f"Writing results to {out_file}...")
write_jsonl(out_file, tqdm.tqdm(combine_results(), total=n_samples))
return pass_at_k
The provided code snippet includes necessary dependencies for implementing the `entry_point` function. Write a Python function `def entry_point( sample_file: str, k: str = "1,10,100", n_workers: int = 4, timeout: float = 3.0, problem_file: str = HUMAN_EVAL, )` to solve the following problem:
Evaluates the functional correctness of generated samples, and writes results to f"{sample_file}_results.jsonl.gz"
Here is the function:
def entry_point(
sample_file: str,
k: str = "1,10,100",
n_workers: int = 4,
timeout: float = 3.0,
problem_file: str = HUMAN_EVAL,
):
"""
Evaluates the functional correctness of generated samples, and writes
results to f"{sample_file}_results.jsonl.gz"
"""
k = list(map(int, k.split(",")))
results = evaluate_functional_correctness(sample_file, k, n_workers, timeout, problem_file)
print(results) | Evaluates the functional correctness of generated samples, and writes results to f"{sample_file}_results.jsonl.gz" |
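The fire import above suggests this module is exposed as a command-line tool. A minimal sketch of that entry point and an illustrative invocation follow; the file name is an assumption.
# Minimal sketch: CLI wrapper around entry_point using python-fire.
def main():
    fire.Fire(entry_point)

if __name__ == "__main__":
    sys.exit(main())

# Illustrative usage (file name assumed):
#   python evaluate_functional_correctness.py samples.jsonl --k "1,10"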
240 | from setuptools import setup, find_namespace_packages
def read_file(fname):
with open(fname, "r", encoding="utf-8") as f:
return f.read() | null |
241 | import argparse
import json
import re
from warnings import warn
from pyspark.sql import SparkSession
from pyspark.sql.types import DateType, FloatType, StructField, StructType
from merlion.spark.dataset import create_hier_dataset, read_dataset, write_dataset, TSID_COL_NAME
from merlion.spark.pandas_udf import forecast, reconciliation
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--train_data", required=True, help="Path at which the train data is stored.")
parser.add_argument("--output_path", required=True, help="Path at which to save output forecasts.")
parser.add_argument(
"--time_stamps",
required=True,
help='JSON list of times we want to forecast, e.g. \'["2022-01-01 00:00:00", "2020-01-01 00:01:00"]\'.',
)
parser.add_argument("--target_col", required=True, help="Name of the column whose value we want to forecast.")
parser.add_argument(
"--predict_on_train", action="store_true", help="Whether to return the model's prediction on the training data."
)
parser.add_argument("--file_format", default="csv", help="File format of train data & output file.")
parser.add_argument(
"--model",
default=json.dumps({"name": "DefaultForecaster"}),
help="JSON dict specifying the model we wish to use for forecasting.",
)
parser.add_argument(
"--index_cols",
default="[]",
help="JSON list of columns used to demarcate different time series. For example, if the dataset contains sales "
'for multiple items at different stores, this could be \'["store", "item"]\'. '
"If not given, we assume the dataset contains only 1 time series.",
)
parser.add_argument(
"--hierarchical",
action="store_true",
default=False,
help="Whether the time series have a hierarchical structure. If true, we aggregate the time series in the "
"dataset (by summation), in the order specified by index_cols. For example, if index_cols is "
'\'["store", "item"]\', we first sum the sales of all items within store, and then sum the global '
"sales of all stores and all items.",
)
parser.add_argument(
"--agg_dict",
default="{}",
help="JSON dict indicating how different data columns should be aggregated if working with hierarchical time "
"series. Keys are column names, values are names of standard aggregations (e.g. sum, mean, max, etc.). "
"If a column is not specified, it is not aggregated. Note that we always sum the target column, regardless of "
"whether it is specified. This ensures that hierarchical time series reconciliation works correctly.",
)
parser.add_argument(
"--time_col",
default=None,
help="Name of the column containing timestamps. We use the first non-index column if one is not given.",
)
parser.add_argument(
"--data_cols",
default=None,
help="JSON list of columns to use when modeling the data."
"If not given, we do univariate forecasting using only target_col.",
)
args = parser.parse_args()
# Parse time_stamps JSON string
try:
time_stamps = json.loads(re.sub("'", '"', args.time_stamps))
assert isinstance(time_stamps, list) and len(time_stamps) > 0
except (json.decoder.JSONDecodeError, AssertionError) as e:
parser.error(
f"Expected --time_stamps to be a non-empty JSON list. Got {args.time_stamps}.\n Caught {type(e).__name__}({e})"
)
else:
args.time_stamps = time_stamps
# Parse index_cols JSON string
try:
index_cols = json.loads(re.sub("'", '"', args.index_cols)) or []
assert isinstance(index_cols, list)
except (json.decoder.JSONDecodeError, AssertionError) as e:
parser.error(
f"Expected --index_cols to be a JSON list. Got {args.index_cols}.\n Caught {type(e).__name__}({e})"
)
else:
args.index_cols = index_cols
# Parse agg_dict JSON string
try:
agg_dict = json.loads(re.sub("'", '"', args.agg_dict)) or {}
assert isinstance(agg_dict, dict)
except (json.decoder.JSONDecodeError, AssertionError) as e:
parser.error(f"Expected --agg_dict to be a JSON dict. Got {args.agg_dict}.\n Caught {type(e).__name__}({e})")
else:
if args.target_col not in agg_dict:
agg_dict[args.target_col] = "sum"
elif agg_dict[args.target_col] != "sum":
warn(
f'Expected the agg_dict to specify "sum" for target_col {args.target_col}, '
f'but got {agg_dict[args.target_col]}. Manually changing to "sum".'
)
agg_dict[args.target_col] = "sum"
args.agg_dict = agg_dict
# Set default data_cols if needed & make sure target_col is in data_cols
if args.data_cols is None:
args.data_cols = [args.target_col]
else:
try:
data_cols = json.loads(re.sub("'", '"', args.data_cols))
assert isinstance(data_cols, list)
except (json.decoder.JSONDecodeError, AssertionError) as e:
parser.error(
f"Expected --data_cols to be a JSON list if given. Got {args.data_cols}.\n"
f"Caught {type(e).__name__}({e})"
)
else:
args.data_cols = data_cols
if args.target_col not in args.data_cols:
parser.error(f"Expected --data_cols {args.data_cols} to contain --target_col {args.target_col}.")
# Parse JSON string for the model and set the model's target_seq_index
try:
model = json.loads(re.sub("'", '"', args.model))
assert isinstance(model, dict)
except (json.decoder.JSONDecodeError, AssertionError) as e:
parser.error(
f"Expected --model to be a JSON dict specifying a Merlion model. Got {args.model}.\n"
f"Caught {type(e).__name__}({e})"
)
else:
target_seq_index = {v: i for i, v in enumerate(args.data_cols)}[args.target_col]
model["target_seq_index"] = target_seq_index
args.model = model
# Only do hierarchical forecasting if there are index columns specifying a hierarchy
args.hierarchical = args.hierarchical and len(args.index_cols) > 0
return args | null |
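Several arguments above are parsed with json.loads(re.sub("'", '"', ...)), which lets callers pass single-quoted JSON on the shell. A short worked example of that normalization:
# Minimal sketch: how the single-quote normalization used above behaves.
import json, re

raw = "['store', 'item']"                        # easy to type inside shell double quotes
index_cols = json.loads(re.sub("'", '"', raw))   # -> ["store", "item"]
assert index_cols == ["store", "item"]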
242 | import argparse
import json
import re
from pyspark.sql import SparkSession
from pyspark.sql.types import DateType, FloatType, StructField, StructType
from merlion.spark.dataset import read_dataset, write_dataset, TSID_COL_NAME
from merlion.spark.pandas_udf import anomaly
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--data", required=True, help="Path at which the dataset is stored.")
parser.add_argument("--output_path", required=True, help="Path at which to save output anomaly scores.")
parser.add_argument(
"--train_test_split", required=True, help="First timestamp in the dataset which should be used for testing."
)
parser.add_argument("--file_format", default="csv", help="File format of train data & output file.")
parser.add_argument(
"--model",
default=json.dumps({"name": "DefaultDetector"}),
help="JSON dict specifying the model we wish to use for anomaly detection.",
)
parser.add_argument(
"--index_cols",
default="[]",
help="JSON list of columns used to demarcate different time series. For example, if the dataset contains sales "
'for multiple items at different stores, this could be \'["store", "item"]\'. '
"If not given, we assume the dataset contains only 1 time series.",
)
parser.add_argument(
"--time_col",
default=None,
help="Name of the column containing timestamps. If not given, use the first non-index column.",
)
parser.add_argument(
"--data_cols",
default="[]",
help="JSON list of columns to use when modeling the data. If not given, use all non-index, non-time columns.",
)
parser.add_argument(
"--predict_on_train", action="store_true", help="Whether to return the model's prediction on the training data."
)
args = parser.parse_args()
# Parse index_cols JSON string
try:
index_cols = json.loads(re.sub("'", '"', args.index_cols))
assert isinstance(index_cols, list)
except (json.decoder.JSONDecodeError, AssertionError) as e:
parser.error(
f"Expected --index_cols to be a JSON list. Got {args.index_cols}.\n" f"Caught {type(e).__name__}({e})"
)
else:
args.index_cols = index_cols
# Parse data_cols JSON string
try:
data_cols = json.loads(re.sub("'", '"', args.data_cols))
assert isinstance(data_cols, list)
except (json.decoder.JSONDecodeError, AssertionError) as e:
parser.error(
f"Expected --data_cols to be a JSON list if given. Got {args.data_cols}.\n"
f"Caught {type(e).__name__}({e})"
)
else:
args.data_cols = data_cols
# Parse JSON string for the model and set the model's target_seq_index
try:
model = json.loads(re.sub("'", '"', args.model))
assert isinstance(model, dict)
except (json.decoder.JSONDecodeError, AssertionError) as e:
parser.error(
f"Expected --model to be a JSON dict specifying a Merlion model. Got {args.model}.\n"
f"Caught {type(e).__name__}({e})"
)
else:
args.model = model
return args | null |
243 | import os
import re
import shutil
from bs4 import BeautifulSoup as bs
from git import Repo
def create_version_dl(soup, prefix, current_version, all_versions):
dl = soup.new_tag("dl")
dt = soup.new_tag("dt")
dt.string = "Versions"
dl.append(dt)
for version in all_versions:
# Create the href for this version & bold it if it's the current version
href = soup.new_tag("a", href=f"{prefix}/{version}/index.html")
href.string = version
if version == current_version:
strong = soup.new_tag("strong")
strong.append(href)
href = strong
# Create a list item & add it to the dl
dd = soup.new_tag("dd")
dd.append(href)
dl.append(dd)
return dl | null |
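A small usage sketch for create_version_dl is shown below; the sidebar class name, version strings, and prefix are assumptions chosen for illustration.
# Minimal sketch (assumptions): build a version list and attach it to a sidebar div.
from bs4 import BeautifulSoup as bs

html = '<html><body><div class="sphinxsidebarwrapper"></div></body></html>'
soup = bs(html, "html.parser")
dl = create_version_dl(soup, prefix="..", current_version="v2.0.0", all_versions=["latest", "v2.0.0", "v1.3.1"])
soup.find("div", class_="sphinxsidebarwrapper").append(dl)
print(soup.prettify())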
244 | import argparse
from collections import OrderedDict
import glob
import json
import logging
import math
import os
import re
import sys
import git
from typing import Dict, List
import numpy as np
import pandas as pd
import tqdm
from merlion.evaluate.forecast import ForecastEvaluator, ForecastMetric, ForecastEvaluatorConfig
from merlion.models.ensemble.combine import CombinerBase, Mean, ModelSelector, MetricWeightedMean
from merlion.models.ensemble.forecast import ForecasterEnsembleConfig, ForecasterEnsemble
from merlion.models.factory import ModelFactory
from merlion.models.forecast.base import ForecasterBase
from merlion.transform.resample import TemporalResample, granularity_str_to_seconds
from merlion.utils import TimeSeries, UnivariateTimeSeries
from merlion.utils.resample import infer_granularity, to_pd_datetime
from ts_datasets.base import BaseDataset
from ts_datasets.forecast import *
import matplotlib.pyplot as plt
CONFIG_JSON = os.path.join(MERLION_ROOT, "conf", "benchmark_forecast.json")
def parse_args():
with open(CONFIG_JSON, "r") as f:
valid_models = list(json.load(f).keys())
parser = argparse.ArgumentParser(
description="Script to benchmark various Merlion forecasting models on "
"univariate forecasting task. This file assumes that "
"you have pip installed both merlion (this repo's main "
"package) and ts_datasets (a sub-repo)."
)
parser.add_argument(
"--dataset",
default="M4_Hourly",
help="Name of dataset to run benchmark on. See get_dataset() "
"in ts_datasets/ts_datasets/forecast/__init__.py for "
"valid options.",
)
parser.add_argument("--data_root", default=None, help="Root directory/file of dataset.")
parser.add_argument("--data_kwargs", default="{}", help="JSON of keyword arguments for the data loader.")
parser.add_argument(
"--models",
type=str,
nargs="*",
default=None,
help="Name of forecasting model to benchmark.",
choices=valid_models,
)
parser.add_argument(
"--hash",
type=str,
default=None,
help="Unique identifier for the output file. Can be useful "
"if doing multiple runs with the same model but different "
"hyperparameters.",
)
parser.add_argument(
"--ensemble_type",
type=str,
default="selector",
help="How to combine multiple models in an ensemble",
choices=["mean", "err_weighted_mean", "selector"],
)
parser.add_argument(
"--retrain_type",
type=str,
default="without_retrain",
help="Name of retrain type, should be one of the three "
"types, without_retrain, sliding_window_retrain"
"or expanding_window_retrain.",
choices=["without_retrain", "sliding_window_retrain", "expanding_window_retrain"],
)
parser.add_argument("--n_retrain", type=int, default=0, help="Specify the number of retrain times.")
parser.add_argument(
"--load_checkpoint",
action="store_true",
default=False,
help="Specify this option if you would like continue "
"training your model on a dataset from a "
"checkpoint, instead of restarting from scratch.",
)
parser.add_argument("--debug", action="store_true", default=False, help="Whether to set logging level to debug.")
parser.add_argument(
"--visualize",
action="store_true",
default=False,
help="Whether to plot the model's predictions after "
"training on each example. Mutually exclusive "
"with running any sort of evaluation.",
)
parser.add_argument(
"--summarize",
action="store_true",
default=False,
help="Specify this option if you want to summarize "
"all results for a particular dataset. Note "
"that this option only summarizes the results "
"that have already been computed! It does not "
"run any algorithms, aside from the one(s) given "
"to --models (if any).",
)
args = parser.parse_args()
args.data_kwargs = json.loads(args.data_kwargs)
assert isinstance(args.data_kwargs, dict)
# If not summarizing all results, we need at least one model to evaluate
if args.summarize and args.models is None:
args.models = []
elif not args.summarize:
if args.models is None:
args.models = ["ARIMA"]
elif len(args.models) == 0:
parser.error("At least one model required if --summarize not given")
return args | null |
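A quick way to see what the parsed namespace looks like is to patch sys.argv before calling parse_args; this requires the repository's conf/benchmark_forecast.json to be present, since the parser reads the valid model names from it.
# Minimal sketch: inspect the parsed benchmark arguments (requires the repo's
# conf/benchmark_forecast.json so that valid model names can be loaded).
import sys

sys.argv = ["benchmark_forecast.py", "--dataset", "M4_Hourly", "--retrain_type", "without_retrain"]
args = parse_args()
print(args.models)        # falls back to ["ARIMA"] when no --models are given
print(args.data_kwargs)   # {} by default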
245 | import argparse
from collections import OrderedDict
import glob
import json
import logging
import math
import os
import re
import sys
import git
from typing import Dict, List
import numpy as np
import pandas as pd
import tqdm
from merlion.evaluate.forecast import ForecastEvaluator, ForecastMetric, ForecastEvaluatorConfig
from merlion.models.ensemble.combine import CombinerBase, Mean, ModelSelector, MetricWeightedMean
from merlion.models.ensemble.forecast import ForecasterEnsembleConfig, ForecasterEnsemble
from merlion.models.factory import ModelFactory
from merlion.models.forecast.base import ForecasterBase
from merlion.transform.resample import TemporalResample, granularity_str_to_seconds
from merlion.utils import TimeSeries, UnivariateTimeSeries
from merlion.utils.resample import infer_granularity, to_pd_datetime
from ts_datasets.base import BaseDataset
from ts_datasets.forecast import *
import matplotlib.pyplot as plt
logger = logging.getLogger(__name__)
MERLION_ROOT = os.path.dirname(os.path.abspath(__file__))
def get_dataset_name(dataset: BaseDataset) -> str:
name = type(dataset).__name__
if hasattr(dataset, "subset") and dataset.subset is not None:
name += "_" + dataset.subset
if isinstance(dataset, CustomDataset):
root = dataset.rootdir
name = os.path.join(name, os.path.basename(os.path.dirname(root) if os.path.isfile(root) else root))
return name
def resolve_model_name(model_name: str):
with open(CONFIG_JSON, "r") as f:
config_dict = json.load(f)
if model_name not in config_dict:
raise NotImplementedError(
f"Benchmarking not implemented for model {model_name}. Valid model names are {list(config_dict.keys())}"
)
while "alias" in config_dict[model_name]:
model_name = config_dict[model_name]["alias"]
return model_name
def get_model(model_name: str, dataset: BaseDataset, **kwargs) -> ForecasterBase:
"""Gets the model, configured for the specified dataset."""
with open(CONFIG_JSON, "r") as f:
config_dict = json.load(f)
if model_name not in config_dict:
raise NotImplementedError(
f"Benchmarking not implemented for model {model_name}. Valid model names are {list(config_dict.keys())}"
)
while "alias" in config_dict[model_name]:
model_name = config_dict[model_name]["alias"]
# Load the model with default kwargs, but override with dataset-specific
# kwargs where relevant, as well as manual kwargs
model_configs = config_dict[model_name]["config"]
model_type = config_dict[model_name].get("model_type", model_name)
model_kwargs = model_configs["default"]
model_kwargs.update(model_configs.get(type(dataset).__name__, {}))
model_kwargs.update(kwargs)
# Override the transform with Identity
if "transform" in model_kwargs:
logger.warning(
f"Data pre-processing transforms currently not "
f"supported for forecasting. Ignoring "
f"transform {model_kwargs['transform']} and "
f"using Identity instead."
)
model_kwargs["transform"] = TemporalResample(
granularity=None, aggregation_policy="Mean", missing_value_policy="FFill"
)
return ModelFactory.create(name=model_type, **model_kwargs)
def get_combiner(ensemble_type: str) -> CombinerBase:
if ensemble_type == "mean":
return Mean(abs_score=False)
elif ensemble_type == "selector":
return ModelSelector(metric=ForecastMetric.sMAPE)
elif ensemble_type == "err_weighted_mean":
return MetricWeightedMean(metric=ForecastMetric.sMAPE)
else:
raise KeyError(f"ensemble_type {ensemble_type} not supported.")
def get_dirname(model_names: List[str], ensemble_type: str) -> str:
dirname = "+".join(sorted(model_names))
if len(model_names) > 1:
dirname += "_" + ensemble_type
return dirname
def get_code_version_info():
return dict(time=str(pd.Timestamp.now()), commit=git.Repo(search_parent_directories=True).head.object.hexsha)
def plot_unrolled_compare(train_vals, test_vals, train_pred, test_pred, outputpath, title):
truth_pd = (train_vals + test_vals).to_pd()
truth_pd.columns = ["ground_truth"]
pred_pd = (train_pred + test_pred).to_pd()
pred_pd.columns = ["prediction"]
result_pd = pd.concat([truth_pd, pred_pd], axis=1)
plt.figure()
plt.rcParams["savefig.dpi"] = 500
plt.rcParams["figure.dpi"] = 500
result_pd.plot(linewidth=0.5)
plt.axvline(train_vals.to_pd().index[-1], color="r")
plt.title(title)
plt.savefig(outputpath)
plt.clf()
class ForecastMetric(Enum):
"""
Enumeration of evaluation metrics for time series forecasting. For each value,
the name is the metric, and the value is a partial function of form
``f(ground_truth, predict, **kwargs)``. Here, ``ground_truth`` is the
original time series, and ``predict`` is the result returned by a
`ForecastEvaluator`.
"""
MAE = partial(accumulate_forecast_score, metric=ForecastScoreAccumulator.mae)
"""
Mean Absolute Error (MAE) is formulated as:
.. math::
\\frac{1}{T}\\sum_{t=1}^T{(|y_t - \\hat{y}_t|)}.
"""
MARRE = partial(accumulate_forecast_score, metric=ForecastScoreAccumulator.marre)
"""
Mean Absolute Ranged Relative Error (MARRE) is formulated as:
.. math::
100 \\cdot \\frac{1}{T} \\sum_{t=1}^{T} {\\left| \\frac{y_t
- \\hat{y}_t} {\\max_t{y_t} - \\min_t{y_t}} \\right|}.
"""
RMSE = partial(accumulate_forecast_score, metric=ForecastScoreAccumulator.rmse)
"""
Root Mean Squared Error (RMSE) is formulated as:
.. math::
\\sqrt{\\frac{1}{T}\\sum_{t=1}^T{(y_t - \\hat{y}_t)^2}}.
"""
sMAPE = partial(accumulate_forecast_score, metric=ForecastScoreAccumulator.smape)
"""
symmetric Mean Absolute Percentage Error (sMAPE) is formulated as:
.. math::
200 \\cdot \\frac{1}{T}\\sum_{t=1}^{T}{\\frac{\\left| y_t
- \\hat{y}_t \\right|}{\\left| y_t \\right| + \\left| \\hat{y}_t \\right|}}.
"""
RMSPE = partial(accumulate_forecast_score, metric=ForecastScoreAccumulator.rmspe)
"""
Root Mean Square Percent Error is formulated as:
.. math:: 100 \\cdot \\sqrt{\\frac{1}{T}\\sum_{t=1}^T\\frac{(y_t - \\hat{y}_t)}{y_t}^2}.
"""
MASE = partial(accumulate_forecast_score, metric=ForecastScoreAccumulator.mase)
"""
Mean Absolute Scaled Error (MASE) is formulated as:
.. math::
\\frac{1}{T}\\cdot\\frac{\\sum_{t=1}^{T}\\left| y_t
- \\hat{y}_t \\right|}{\\frac{1}{N-m}\\sum_{t=m+1}^{N}\\left| x_t - x_{t-m} \\right|}.
"""
MSIS = partial(accumulate_forecast_score, metric=ForecastScoreAccumulator.msis)
"""
Mean Scaled Interval Score (MSIS) is formulated as:
.. math::
\\frac{1}{T}\\cdot\\frac{\\sum_{t=1}^{T} (U_t - L_t) + 100 \\cdot (L_t - y_t)[y_t<L_t]
+ 100\\cdot(y_t - U_t)[y_t > U_t]}{\\frac{1}{N-m}\\sum_{t=m+1}^{N}\\left| x_t - x_{t-m} \\right|}.
"""
class ForecastEvaluatorConfig(EvaluatorConfig):
"""
Configuration class for a `ForecastEvaluator`
"""
_timedelta_keys = EvaluatorConfig._timedelta_keys + ["horizon"]
def __init__(self, horizon: float = None, **kwargs):
"""
:param horizon: the model's prediction horizon. Whenever the model makes
a prediction, it will predict ``horizon`` seconds into the future.
"""
super().__init__(**kwargs)
self.horizon = horizon
def horizon(self) -> Union[pd.Timedelta, pd.DateOffset, None]:
"""
:return: the horizon our model is predicting into the future. Defaults to the retraining frequency.
"""
if self._horizon is None:
return self.retrain_freq
return self._horizon
def horizon(self, horizon):
self._horizon = to_offset(horizon)
def cadence(self) -> Union[pd.Timedelta, pd.DateOffset, None]:
"""
:return: the cadence at which we are having our model produce new predictions. Defaults to the predictive
horizon if there is one, and the retraining frequency otherwise.
"""
if self._cadence is None:
return self.horizon
return self._cadence
def cadence(self, cadence):
self._cadence = to_offset(cadence)
class ForecastEvaluator(EvaluatorBase):
"""
Simulates the live deployment of a forecaster model.
"""
config_class = ForecastEvaluatorConfig
def __init__(self, model, config):
assert isinstance(model, ForecasterBase)
super().__init__(model=model, config=config)
def horizon(self):
return self.config.horizon
def cadence(self):
return self.config.cadence
def _call_model(
self,
time_series: TimeSeries,
time_series_prev: TimeSeries,
exog_data: TimeSeries = None,
return_err: bool = False,
) -> Union[Tuple[TimeSeries, TimeSeries], TimeSeries]:
if self.model.target_seq_index is not None:
name = time_series.names[self.model.target_seq_index]
time_stamps = time_series.univariates[name].time_stamps
else:
time_stamps = time_series.time_stamps
forecast, err = self.model.forecast(
time_stamps=time_stamps, time_series_prev=time_series_prev, exog_data=exog_data
)
return (forecast, err) if return_err else forecast
def evaluate(
self,
ground_truth: TimeSeries,
predict: Union[TimeSeries, List[TimeSeries]],
metric: ForecastMetric = ForecastMetric.sMAPE,
):
"""
:param ground_truth: the series of test data
:param predict: the series of predicted values
:param metric: the evaluation metric.
"""
if self.model.target_seq_index is not None:
name = ground_truth.names[self.model.target_seq_index]
ground_truth = ground_truth.univariates[name].to_ts()
if isinstance(predict, TimeSeries):
if metric is not None:
return metric.value(ground_truth, predict)
return accumulate_forecast_score(ground_truth, predict)
else:
if metric is not None:
weights = np.asarray([len(p) for p in predict if not p.is_empty()])
vals = [metric.value(ground_truth, p) for p in predict if not p.is_empty()]
return np.dot(weights / weights.sum(), vals)
return [accumulate_forecast_score(ground_truth, p) for p in predict if not p.is_empty()]
class ForecasterEnsembleConfig(ForecasterExogConfig, EnsembleConfig):
"""
Config class for an ensemble of forecasters.
"""
_default_combiner = Mean(abs_score=False)
def __init__(self, max_forecast_steps=None, target_seq_index=None, verbose=False, **kwargs):
self.verbose = verbose
super().__init__(max_forecast_steps=max_forecast_steps, target_seq_index=None, **kwargs)
# Override the target_seq_index of all individual models after everything has been initialized
# FIXME: doesn't work if models have heterogeneous transforms which change the dim of the input time series
self.target_seq_index = target_seq_index
if self.models is not None:
assert all(model.target_seq_index == self.target_seq_index for model in self.models)
def target_seq_index(self):
return self._target_seq_index
def target_seq_index(self, target_seq_index):
if self.models is not None:
# Get the target_seq_index from the models if None is given
if target_seq_index is None:
non_none_idxs = [m.target_seq_index for m in self.models if m.target_seq_index is not None]
if len(non_none_idxs) > 0:
target_seq_index = non_none_idxs[0]
assert all(m.target_seq_index in [None, target_seq_index] for m in self.models), (
f"Attempted to infer target_seq_index from the individual models in the ensemble, but "
f"not all models have the same target_seq_index. Got {[m.target_seq_index for m in self.models]}"
)
# Only override the target_seq_index from the models if there is one
if target_seq_index is not None:
for model in self.models:
model.config.target_seq_index = target_seq_index
# Save the ensemble-level target_seq_index as a private variable
self._target_seq_index = target_seq_index
class ForecasterEnsemble(EnsembleBase, ForecasterExogBase):
"""
Class representing an ensemble of multiple forecasting models.
"""
models: List[ForecasterBase]
config_class = ForecasterEnsembleConfig
def _default_train_config(self):
return EnsembleTrainConfig(valid_frac=0.2)
def require_even_sampling(self) -> bool:
return False
def __init__(self, config: ForecasterEnsembleConfig = None, models: List[ForecasterBase] = None):
super().__init__(config=config, models=models)
for model in self.models:
assert isinstance(
model, ForecasterBase
), f"Expected all models in {type(self).__name__} to be forecasters, but got a {type(model).__name__}."
model.config.invert_transform = True
def train_pre_process(
self, train_data: TimeSeries, exog_data: TimeSeries = None, return_exog=None
) -> Union[TimeSeries, Tuple[TimeSeries, Union[TimeSeries, None]]]:
idxs = [model.target_seq_index for model in self.models]
if any(i is not None for i in idxs):
self.config.target_seq_index = [i for i in idxs if i is not None][0]
assert all(i in [None, self.target_seq_index] for i in idxs), (
f"All individual forecasters must have the same target_seq_index "
f"to be used in a ForecasterEnsemble, but got the following "
f"target_seq_idx values: {idxs}"
)
return super().train_pre_process(train_data=train_data, exog_data=exog_data, return_exog=return_exog)
def resample_time_stamps(self, time_stamps: Union[int, List[int]], time_series_prev: TimeSeries = None):
return time_stamps
def train_combiner(self, all_model_outs: List[TimeSeries], target: TimeSeries, **kwargs) -> TimeSeries:
return super().train_combiner(all_model_outs, target, target_seq_index=self.target_seq_index, **kwargs)
def _train_with_exog(
self, train_data: TimeSeries, train_config: EnsembleTrainConfig = None, exog_data: TimeSeries = None
) -> Tuple[Optional[TimeSeries], None]:
train, valid = self.train_valid_split(train_data, train_config)
per_model_train_configs = train_config.per_model_train_configs
if per_model_train_configs is None:
per_model_train_configs = [None] * len(self.models)
assert len(per_model_train_configs) == len(self.models), (
f"You must provide the same number of per-model train configs "
f"as models, but received received {len(per_model_train_configs)} "
f"train configs for an ensemble with {len(self.models)} models"
)
# Train individual models on the training data
preds, errs = [], []
eval_cfg = ForecastEvaluatorConfig(retrain_freq=None, horizon=self.get_max_common_horizon(train))
# TODO: parallelize me
for i, (model, cfg) in enumerate(zip(self.models, per_model_train_configs)):
logger.info(f"Training & evaluating model {i+1}/{len(self.models)} ({type(model).__name__})...")
try:
train_kwargs = dict(train_config=cfg)
(train_pred, train_err), pred = ForecastEvaluator(model=model, config=eval_cfg).get_predict(
train_vals=train, test_vals=valid, exog_data=exog_data, train_kwargs=train_kwargs
)
preds.append(train_pred if valid is None else pred)
errs.append(train_err if valid is None else None)
except Exception:
logger.warning(
f"Caught an exception while training model {i+1}/{len(self.models)} ({type(model).__name__}). "
f"Model will not be used. {traceback.format_exc()}"
)
self.combiner.set_model_used(i, False)
preds.append(None)
errs.append(None)
# Train the combiner on the train data if we didn't use validation data.
if valid is None:
pred = self.train_combiner(preds, train_data)
err = None if any(e is None for e in errs) else self.combiner(errs, train_data)
return pred, err
# Otherwise, train the combiner on the validation data, and re-train the models on the full data
self.train_combiner(preds, valid)
full_preds, full_errs = [], []
# TODO: parallelize me
for i, (model, used, cfg) in enumerate(zip(self.models, self.models_used, per_model_train_configs)):
model.reset()
if used:
logger.info(f"Re-training model {i+1}/{len(self.models)} ({type(model).__name__}) on full data...")
pred, err = model.train(train_data, train_config=cfg, exog_data=exog_data)
else:
pred, err = None, None
full_preds.append(pred)
full_errs.append(err)
if any(used and e is None for used, e in zip(self.models_used, full_errs)):
err = None
else:
err = self.combiner(full_errs, train_data)
return self.combiner(full_preds, train_data), err
def _forecast_with_exog(
self,
time_stamps: List[int],
time_series_prev: pd.DataFrame = None,
return_prev=False,
exog_data: pd.DataFrame = None,
exog_data_prev: pd.DataFrame = None,
) -> Tuple[pd.DataFrame, Optional[pd.DataFrame]]:
preds, errs = [], []
time_series_prev = TimeSeries.from_pd(time_series_prev)
if exog_data is not None:
exog_data = pd.concat((exog_data_prev, exog_data)) if exog_data_prev is not None else exog_data
exog_data = TimeSeries.from_pd(exog_data)
for model, used in zip(self.models, self.models_used):
if used:
pred, err = model.forecast(
time_stamps=time_stamps,
time_series_prev=time_series_prev,
exog_data=exog_data,
return_prev=return_prev,
)
preds.append(pred)
errs.append(err)
pred = self.combiner(preds, None).to_pd()
err = None if any(e is None for e in errs) else self.combiner(errs, None).to_pd()
return pred, err
class TemporalResample(TransformBase):
"""
Defines a policy to temporally resample a time series at a specified granularity. Note that while this transform
does support inversion, the recovered time series may differ from the input due to information loss when resampling.
"""
def __init__(
self,
granularity: Union[str, int, float] = None,
origin: int = None,
trainable_granularity: bool = None,
remove_non_overlapping=True,
aggregation_policy: Union[str, AggregationPolicy] = "Mean",
missing_value_policy: Union[str, MissingValuePolicy] = "Interpolate",
):
"""
Defines a policy to temporally resample a time series.
:param granularity: The granularity at which we want to resample.
:param origin: The time stamp defining the offset to start at.
:param trainable_granularity: Whether we will automatically infer the granularity of the time series.
If ``None`` (default), it will be trainable only if no granularity is explicitly given.
:param remove_non_overlapping: If ``True``, we will only keep the portions
of the univariates that overlap with each other. For example, if we
have 3 univariates which span timestamps [0, 3600], [60, 3660], and
[30, 3540], we will only keep timestamps in the range [60, 3540]. If
``False``, we will keep all timestamps produced by the resampling.
:param aggregation_policy: The policy we will use to aggregate multiple values in a window (downsampling).
:param missing_value_policy: The policy we will use to impute missing values (upsampling).
"""
super().__init__()
self.granularity = granularity
self.origin = origin
self.trainable_granularity = (granularity is None) if trainable_granularity is None else trainable_granularity
self.remove_non_overlapping = remove_non_overlapping
self.aggregation_policy = aggregation_policy
self.missing_value_policy = missing_value_policy
def requires_inversion_state(self):
return False
def proper_inversion(self):
"""
We treat resampling as a proper inversion to avoid emitting warnings.
"""
return True
def granularity(self):
return self._granularity
def granularity(self, granularity):
if not isinstance(granularity, (int, float)):
try:
granularity = granularity_str_to_seconds(granularity)
except:
granularity = getattr(granularity, "freqstr", granularity)
self._granularity = granularity
def aggregation_policy(self) -> AggregationPolicy:
return self._aggregation_policy
def aggregation_policy(self, agg: Union[str, AggregationPolicy]):
if isinstance(agg, str):
valid = set(AggregationPolicy.__members__.keys())
if agg not in valid:
raise KeyError(f"{agg} is not a valid aggregation policy. Valid aggregation policies are: {valid}")
agg = AggregationPolicy[agg]
self._aggregation_policy = agg
def missing_value_policy(self) -> MissingValuePolicy:
return self._missing_value_policy
def missing_value_policy(self, mv: Union[str, MissingValuePolicy]):
if isinstance(mv, str):
valid = set(MissingValuePolicy.__members__.keys())
if mv not in valid:
raise KeyError(f"{mv} is not a valid missing value policy. Valid aggregation policies are: {valid}")
mv = MissingValuePolicy[mv]
self._missing_value_policy = mv
def train(self, time_series: TimeSeries):
if self.trainable_granularity:
granularity = infer_granularity(time_series.np_time_stamps)
logger.warning(f"Inferred granularity {granularity}")
self.granularity = granularity
if self.trainable_granularity or self.origin is None:
t0, tf = time_series.t0, time_series.tf
if isinstance(self.granularity, (int, float)):
offset = (tf - t0) % self.granularity
else:
offset = 0
self.origin = t0 + offset
def __call__(self, time_series: TimeSeries) -> TimeSeries:
if self.granularity is None:
logger.warning(
f"Skipping resampling step because granularity is "
f"None. Please either specify a granularity or train "
f"this transformation on a time series."
)
return time_series
return time_series.align(
alignment_policy=AlignPolicy.FixedGranularity,
granularity=self.granularity,
origin=self.origin,
remove_non_overlapping=self.remove_non_overlapping,
aggregation_policy=self.aggregation_policy,
missing_value_policy=self.missing_value_policy,
)
def to_pd_datetime(timestamp):
"""
Converts a timestamp (or list/iterable of timestamps) to pandas Datetime, truncated at the millisecond.
"""
if isinstance(timestamp, pd.DatetimeIndex):
return timestamp
elif isinstance(timestamp, (int, float)):
return pd.to_datetime(int(timestamp * 1000), unit="ms")
elif isinstance(timestamp, Iterable) and all(isinstance(t, (int, float)) for t in timestamp):
timestamp = pd.to_datetime(np.asarray(timestamp).astype(float) * 1000, unit="ms")
elif isinstance(timestamp, np.ndarray) and timestamp.dtype in [int, np.float32, np.float64]:
timestamp = pd.to_datetime(np.asarray(timestamp).astype(float) * 1000, unit="ms")
return pd.to_datetime(timestamp)
def infer_granularity(time_stamps, return_offset=False):
"""
Infers the granularity of a list of time stamps.
"""
# See if pandas can infer the granularity on its own
orig_t = to_pd_datetime(time_stamps)
if len(orig_t) > 2:
freq = pd.infer_freq(orig_t)
elif len(orig_t) == 2:
freq = orig_t[1] - orig_t[0]
else:
raise ValueError("Need at least 2 timestamps to infer a granularity.")
offset = pd.to_timedelta(0)
if freq is not None:
freq = pd_to_offset(freq)
return (freq, offset) if return_offset else freq
# Otherwise, start with the most commonly occurring timedelta
dt = pd.to_timedelta(scipy.stats.mode(orig_t[1:] - orig_t[:-1], axis=None)[0].item())
# Check if the data could be sampled at a k-monthly granularity.
candidate_freqs = [dt]
for k in range(math.ceil(dt / pd.Timedelta(days=31)), math.ceil(dt / pd.Timedelta(days=28))):
candidate_freqs.extend([pd_to_offset(f"{k}MS"), pd_to_offset(f"{k}M")])
# Pick the sampling frequency which has the most overlap with the actual timestamps
freq2idx = {f: pd.date_range(start=orig_t[0], end=orig_t[-1], freq=f) for f in candidate_freqs}
freq2offset = {f: get_date_offset(time_stamps=freq2idx[f], reference=orig_t) for f in candidate_freqs}
freq = sorted(freq2idx.keys(), key=lambda f: len((freq2idx[f] + freq2offset[f]).intersection(orig_t)))[-1]
return (freq, freq2offset[freq]) if return_offset else freq
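A small illustration of the two resampling helpers above, using Unix timestamps spaced one hour apart (values illustrative):
# Minimal sketch: to_pd_datetime / infer_granularity on toy hourly timestamps.
stamps = [1609459200, 1609462800, 1609466400]   # 2021-01-01 00:00, 01:00, 02:00 UTC
print(to_pd_datetime(stamps))                   # DatetimeIndex with hourly spacing
print(infer_granularity(stamps))                # expected to resolve to an hourly offset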
The provided code snippet includes necessary dependencies for implementing the `train_model` function. Write a Python function `def train_model( model_names: List[str], dataset: BaseDataset, ensemble_type: str, csv: str, config_fname: str, retrain_type: str = "without_retrain", n_retrain: int = 10, load_checkpoint: bool = False, visualize: bool = False, )` to solve the following problem:
Trains all the model on the dataset, and evaluates its predictions for every horizon setting on every time series.
Here is the function:
def train_model(
model_names: List[str],
dataset: BaseDataset,
ensemble_type: str,
csv: str,
config_fname: str,
retrain_type: str = "without_retrain",
n_retrain: int = 10,
load_checkpoint: bool = False,
visualize: bool = False,
):
"""
Trains all the model on the dataset, and evaluates its predictions for every
horizon setting on every time series.
"""
model_names = [resolve_model_name(m) for m in model_names]
dirname = get_dirname(model_names, ensemble_type)
dirname = dirname + "_" + retrain_type + str(n_retrain)
results_dir = os.path.join(MERLION_ROOT, "results", "forecast", dirname)
os.makedirs(results_dir, exist_ok=True)
dataset_name = get_dataset_name(dataset)
# Determine where to start within the dataset if there is a checkpoint
if os.path.isfile(csv) and load_checkpoint:
i0 = pd.read_csv(csv).idx.max()
else:
i0 = -1
os.makedirs(os.path.dirname(csv), exist_ok=True)
with open(csv, "w") as f:
f.write("idx,name,horizon,retrain_type,n_retrain,RMSE,sMAPE\n")
model = None
# loop over dataset
is_multivariate_data = dataset[0][0].shape[1] > 1
for i, (df, md) in enumerate(tqdm.tqdm(dataset, desc=f"{dataset_name} Dataset")):
if i <= i0:
continue
trainval = md["trainval"]
# Resample to an appropriate granularity according to metadata
if "granularity" in md:
dt = md["granularity"]
df = df.resample(dt, closed="right", label="right").mean().interpolate()
vals = TimeSeries.from_pd(df)
dt = infer_granularity(vals.time_stamps)
# Get the train/val split
t = trainval.index[np.argmax(~trainval)].value // 1e9
train_vals, test_vals = vals.bisect(t, t_in_left=False)
# Compute train_window_len and test_window_len
train_start_timestamp = train_vals.univariates[train_vals.names[0]].time_stamps[0]
test_start_timestamp = test_vals.univariates[test_vals.names[0]].time_stamps[0]
train_window_len = test_start_timestamp - train_start_timestamp
train_end_timestamp = train_vals.univariates[train_vals.names[0]].time_stamps[-1]
test_end_timestamp = test_vals.univariates[test_vals.names[0]].time_stamps[-1]
test_window_len = test_end_timestamp - train_end_timestamp
# Get all the horizon conditions we want to evaluate from metadata
if any("condition" in k and isinstance(v, list) for k, v in md.items()):
conditions = sum([v for k, v in md.items() if "condition" in k and isinstance(v, list)], [])
logger.debug("\n" + "=" * 80 + "\n" + df.columns[0] + "\n" + "=" * 80 + "\n")
horizons = set()
for condition in conditions:
horizons.update([v for k, v in condition.items() if "horizon" in k])
# For multivariate data, we use a horizon of 3
elif is_multivariate_data:
horizons = [3 * dt]
# For univariate data, we predict the entire test data in batch
else:
horizons = [test_window_len]
# loop over horizon conditions
for horizon in horizons:
horizon = granularity_str_to_seconds(horizon)
try:
max_forecast_steps = int(math.ceil(horizon / dt.total_seconds()))
except:
window = TimeSeries.from_pd(test_vals.to_pd()[: to_pd_datetime(train_end_timestamp + horizon)])
max_forecast_steps = len(TemporalResample(granularity=dt)(window))
logger.debug(f"horizon is {pd.Timedelta(seconds=horizon)} and max_forecast_steps is {max_forecast_steps}")
if retrain_type == "without_retrain":
retrain_freq = None
train_window = None
n_retrain = 0
elif retrain_type == "sliding_window_retrain":
retrain_freq = math.ceil(test_window_len / int(n_retrain))
train_window = train_window_len
horizon = min(retrain_freq, horizon)
elif retrain_type == "expanding_window_retrain":
retrain_freq = math.ceil(test_window_len / int(n_retrain))
train_window = None
horizon = min(retrain_freq, horizon)
else:
raise ValueError(
"the retrain_type should be without_retrain, sliding_window_retrain or expanding_window_retrain"
)
# Get Model
models = [get_model(m, dataset, max_forecast_steps=max_forecast_steps) for m in model_names]
if len(models) == 1:
model = models[0]
else:
config = ForecasterEnsembleConfig(combiner=get_combiner(ensemble_type))
model = ForecasterEnsemble(config=config, models=models)
evaluator = ForecastEvaluator(
model=model,
config=ForecastEvaluatorConfig(train_window=train_window, horizon=horizon, retrain_freq=retrain_freq),
)
# Get Evaluate Results
train_result, test_pred = evaluator.get_predict(train_vals=train_vals, test_vals=test_vals)
rmses = evaluator.evaluate(ground_truth=test_vals, predict=test_pred, metric=ForecastMetric.RMSE)
smapes = evaluator.evaluate(ground_truth=test_vals, predict=test_pred, metric=ForecastMetric.sMAPE)
# Log relevant info to the CSV
with open(csv, "a") as f:
f.write(f"{i},{df.columns[0]},{horizon},{retrain_type},{n_retrain},{rmses},{smapes}\n")
# generate comparison plot
if visualize:
name = train_vals.names[0]
train_time_stamps = train_vals.univariates[name].time_stamps
fig_dir = os.path.join(results_dir, dataset_name + "_figs")
os.makedirs(fig_dir, exist_ok=True)
fig_dataset_dir = os.path.join(fig_dir, df.columns[0])
os.makedirs(fig_dataset_dir, exist_ok=True)
if train_result[0] is not None:
train_pred = train_result[0]
else:
train_pred = TimeSeries({name: UnivariateTimeSeries(train_time_stamps, None)})
fig_name = dirname + "_" + retrain_type + str(n_retrain) + "_" + "horizon" + str(int(horizon)) + ".png"
plot_unrolled_compare(
train_vals,
test_vals,
train_pred,
test_pred,
os.path.join(fig_dataset_dir, fig_name),
dirname + f"(sMAPE={smapes:.4f})",
)
# Log relevant info to the logger
logger.debug(f"{dirname} {retrain_type} {n_retrain} sMAPE : {smapes:.4f}\n")
# Save full experimental config
if model is not None:
full_config = dict(
model_config=model.config.to_dict(),
evaluator_config=evaluator.config.to_dict(),
code_version_info=get_code_version_info(),
)
with open(config_fname, "w") as f:
json.dump(full_config, f, indent=2, sort_keys=True) | Trains all the model on the dataset, and evaluates its predictions for every horizon setting on every time series. |
246 | import argparse
from collections import OrderedDict
import glob
import json
import logging
import math
import os
import re
import sys
import git
from typing import Dict, List
import numpy as np
import pandas as pd
import tqdm
from merlion.evaluate.forecast import ForecastEvaluator, ForecastMetric, ForecastEvaluatorConfig
from merlion.models.ensemble.combine import CombinerBase, Mean, ModelSelector, MetricWeightedMean
from merlion.models.ensemble.forecast import ForecasterEnsembleConfig, ForecasterEnsemble
from merlion.models.factory import ModelFactory
from merlion.models.forecast.base import ForecasterBase
from merlion.transform.resample import TemporalResample, granularity_str_to_seconds
from merlion.utils import TimeSeries, UnivariateTimeSeries
from merlion.utils.resample import infer_granularity, to_pd_datetime
from ts_datasets.base import BaseDataset
from ts_datasets.forecast import *
import matplotlib.pyplot as plt
The provided code snippet includes necessary dependencies for implementing the `join_dfs` function. Write a Python function `def join_dfs(name2df: Dict[str, pd.DataFrame]) -> pd.DataFrame` to solve the following problem:
Joins multiple results dataframes into a single dataframe describing the results from all models.
Here is the function:
def join_dfs(name2df: Dict[str, pd.DataFrame]) -> pd.DataFrame:
"""
Joins multiple results dataframes into a single dataframe describing the
results from all models.
"""
full_df, lsuffix = None, ""
shared_cols = ["idx", "name", "horizon", "retrain_type", "n_retrain"]
for name, df in name2df.items():
df.columns = [c if c in shared_cols else f"{c}_{name}" for c in df.columns]
if full_df is None:
full_df = df
else:
full_df = full_df.merge(df, how="outer", left_on=shared_cols, right_on=shared_cols)
unique_cols = [c for c in full_df.columns if c not in shared_cols]
return full_df[shared_cols + unique_cols] | Joins multiple results dataframes into a single dataframe describing the results from all models. |
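A toy example of join_dfs on two single-row result frames; the metric values are illustrative.
# Minimal sketch: join two per-model result frames on the shared key columns.
import pandas as pd

arima = pd.DataFrame({"idx": [0], "name": ["H1"], "horizon": [3600.0], "retrain_type": ["without_retrain"],
                      "n_retrain": [0], "RMSE": [12.3], "sMAPE": [4.5]})
prophet = pd.DataFrame({"idx": [0], "name": ["H1"], "horizon": [3600.0], "retrain_type": ["without_retrain"],
                        "n_retrain": [0], "RMSE": [10.1], "sMAPE": [3.9]})
full = join_dfs({"Arima": arima, "Prophet": prophet})
print(full.columns.tolist())  # shared keys first, then RMSE_Arima, sMAPE_Arima, RMSE_Prophet, sMAPE_Prophet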
247 | import argparse
from collections import OrderedDict
import glob
import json
import logging
import math
import os
import re
import sys
import git
from typing import Dict, List
import numpy as np
import pandas as pd
import tqdm
from merlion.evaluate.forecast import ForecastEvaluator, ForecastMetric, ForecastEvaluatorConfig
from merlion.models.ensemble.combine import CombinerBase, Mean, ModelSelector, MetricWeightedMean
from merlion.models.ensemble.forecast import ForecasterEnsembleConfig, ForecasterEnsemble
from merlion.models.factory import ModelFactory
from merlion.models.forecast.base import ForecasterBase
from merlion.transform.resample import TemporalResample, granularity_str_to_seconds
from merlion.utils import TimeSeries, UnivariateTimeSeries
from merlion.utils.resample import infer_granularity, to_pd_datetime
from ts_datasets.base import BaseDataset
from ts_datasets.forecast import *
import matplotlib.pyplot as plt
def summarize_full_df(full_df: pd.DataFrame) -> pd.DataFrame:
# Get the names of all algorithms which have full results
algs = [col[len("sMAPE") :] for col in full_df.columns if col.startswith("sMAPE") and not full_df[col].isna().any()]
summary_df = pd.DataFrame({alg.lstrip("_"): [] for alg in algs})
# Compute pooled (per time series) mean/median sMAPE, RMSE
mean_smape, med_smape, mean_rmse, med_rmse = [[] for _ in range(4)]
for ts_name in np.unique(full_df.name):
ts = full_df[full_df.name == ts_name]
# append smape
smapes = ts[[f"sMAPE{alg}" for alg in algs]]
mean_smape.append(smapes.mean(axis=0).values)
med_smape.append(smapes.median(axis=0).values)
# append rmse
rmses = ts[[f"RMSE{alg}" for alg in algs]]
mean_rmse.append(rmses.mean(axis=0).values)
med_rmse.append(rmses.median(axis=0).values)
# Add mean/median sMAPE and RMSE values to the summary dataframe
summary_df.loc["mean_sMAPE"] = np.mean(mean_smape, axis=0)
summary_df.loc["median_sMAPE"] = np.median(med_smape, axis=0)
summary_df.loc["mean_RMSE"] = np.mean(mean_rmse, axis=0)
summary_df.loc["median_RMSE"] = np.median(med_rmse, axis=0)
return summary_df | null |
248 | from enum import Enum
from functools import partial
from typing import List, Union, Tuple
import warnings
import numpy as np
import pandas as pd
from merlion.evaluate.base import EvaluatorBase, EvaluatorConfig
from merlion.models.forecast.base import ForecasterBase
from merlion.utils import TimeSeries, UnivariateTimeSeries
from merlion.utils.resample import to_offset
class ForecastScoreAccumulator:
"""
Accumulator which maintains summary statistics describing a forecasting
algorithm's performance. Can be used to compute many different forecasting metrics.
"""
def __init__(
self,
ground_truth: Union[UnivariateTimeSeries, TimeSeries],
predict: Union[UnivariateTimeSeries, TimeSeries],
insample: Union[UnivariateTimeSeries, TimeSeries] = None,
periodicity: int = 1,
ub: TimeSeries = None,
lb: TimeSeries = None,
target_seq_index: int = None,
):
"""
:param ground_truth: ground truth time series
:param predict: predicted time series
:param insample (optional): time series used for training the model. This value is used for computing MASE, MSIS
:param periodicity (optional): periodicity. m=1 indicates a non-seasonal time series,
whereas m>1 indicates a seasonal time series. This value is used for computing MASE, MSIS.
:param ub (optional): upper bound of 95% prediction interval. This value is used for computing MSIS
:param lb (optional): lower bound of 95% prediction interval. This value is used for computing MSIS
:param target_seq_index (optional): the index of the target sequence, for multivariate.
"""
ground_truth = TimeSeries.from_pd(ground_truth)
predict = TimeSeries.from_pd(predict)
insample = TimeSeries.from_pd(insample)
t0, tf = predict.t0, predict.tf
ground_truth = ground_truth.window(t0, tf, include_tf=True).align()
if target_seq_index is not None:
ground_truth = ground_truth.univariates[ground_truth.names[target_seq_index]].to_ts()
if insample is not None:
insample = insample.univariates[insample.names[target_seq_index]].to_ts()
else:
assert ground_truth.dim == 1 and (
insample is None or insample.dim == 1
), "Expected to receive either univariate ground truth time series or non-None target_seq_index"
self.ground_truth = ground_truth
self.predict = predict.align(reference=ground_truth.time_stamps)
self.insample = insample
self.periodicity = periodicity
self.ub = ub
self.lb = lb
self.target_seq_index = target_seq_index
def check_before_eval(self):
# Make sure time series is univariate
assert self.predict.dim == self.ground_truth.dim == 1
# Make sure the timestamps of preds and targets are identical
assert self.predict.time_stamps == self.ground_truth.time_stamps
def mae(self):
"""
Mean Absolute Error (MAE)
For ground truth time series :math:`y` and predicted time series :math:`\\hat{y}`
of length :math:`T`, it is computed as
.. math:: \\frac{1}{T}\\sum_{t=1}^T{(|y_t - \\hat{y}_t|)}.
"""
self.check_before_eval()
predict_values = self.predict.univariates[self.predict.names[0]].np_values
ground_truth_values = self.ground_truth.univariates[self.ground_truth.names[0]].np_values
return np.mean(np.abs(ground_truth_values - predict_values))
def marre(self):
"""
Mean Absolute Ranged Relative Error (MARRE)
For ground truth time series :math:`y` and predicted time series :math:`\\hat{y}`
of length :math:`T`, it is computed as
.. math:: 100 \\cdot \\frac{1}{T} \\sum_{t=1}^{T} {\\left| \\frac{y_t - \\hat{y}_t} {\\max_t{y_t} -
\\min_t{y_t}} \\right|}.
"""
self.check_before_eval()
predict_values = self.predict.univariates[self.predict.names[0]].np_values
ground_truth_values = self.ground_truth.univariates[self.ground_truth.names[0]].np_values
assert ground_truth_values.max() > ground_truth_values.min()
true_range = ground_truth_values.max() - ground_truth_values.min()
return 100.0 * np.mean(np.abs((ground_truth_values - predict_values) / true_range))
def rmse(self):
"""
Root Mean Squared Error (RMSE)
For ground truth time series :math:`y` and predicted time series :math:`\\hat{y}`
of length :math:`T`, it is computed as
.. math:: \\sqrt{\\frac{1}{T}\\sum_{t=1}^T{(y_t - \\hat{y}_t)^2}}.
"""
self.check_before_eval()
predict_values = self.predict.univariates[self.predict.names[0]].np_values
ground_truth_values = self.ground_truth.univariates[self.ground_truth.names[0]].np_values
return np.sqrt(np.mean((ground_truth_values - predict_values) ** 2))
def smape(self):
"""
symmetric Mean Absolute Percentage Error (sMAPE). For ground truth time series :math:`y`
and predicted time series :math:`\\hat{y}` of length :math:`T`, it is computed as
.. math::
200 \\cdot \\frac{1}{T}
\\sum_{t=1}^{T}{\\frac{\\left| y_t - \\hat{y}_t \\right|}{\\left| y_t \\right|
+ \\left| \\hat{y}_t \\right|}}.
"""
self.check_before_eval()
predict_values = self.predict.univariates[self.predict.names[0]].np_values
ground_truth_values = self.ground_truth.univariates[self.ground_truth.names[0]].np_values
errors = np.abs(ground_truth_values - predict_values)
scale = np.abs(ground_truth_values) + np.abs(predict_values)
# Make sure the divisor is not close to zero at each timestamp
if (scale < 1e-8).any():
warnings.warn("Some values very close to 0, sMAPE might not be estimated accurately.")
return np.mean(200.0 * errors / (scale + 1e-8))
def rmspe(self):
"""
Root Mean Squared Percent Error (RMSPE)
For ground truth time series :math:`y` and predicted time series :math:`\\hat{y}`
of length :math:`T`, it is computed as
.. math:: 100 \\cdot \\sqrt{\\frac{1}{T}\\sum_{t=1}^T\\frac{(y_t - \\hat{y}_t)}{y_t}^2}.
"""
self.check_before_eval()
predict_values = self.predict.univariates[self.predict.names[0]].np_values
ground_truth_values = self.ground_truth.univariates[self.ground_truth.names[0]].np_values
if (ground_truth_values < 1e-8).any():
warnings.warn("Some values very close to 0, RMSPE might not be estimated accurately.")
errors = ground_truth_values - predict_values
return 100 * np.sqrt(np.mean(np.square(errors / ground_truth_values)))
def mase(self):
"""
Mean Absolute Scaled Error (MASE)
For ground truth time series :math:`y` and predicted time series :math:`\\hat{y}`
of length :math:`T`. In sample time series :math:`\\hat{x}` of length :math:`N`
and periodicity :math:`m` it is computed as
.. math::
\\frac{1}{T}\\cdot\\frac{\\sum_{t=1}^{T}\\left| y_t
- \\hat{y}_t \\right|}{\\frac{1}{N-m}\\sum_{t=m+1}^{N}\\left| x_t - x_{t-m} \\right|}.
"""
self.check_before_eval()
assert self.insample.dim == 1
insample_values = self.insample.univariates[self.insample.names[0]].np_values
predict_values = self.predict.univariates[self.predict.names[0]].np_values
ground_truth_values = self.ground_truth.univariates[self.ground_truth.names[0]].np_values
errors = np.abs(ground_truth_values - predict_values)
scale = np.mean(np.abs(insample_values[self.periodicity :] - insample_values[: -self.periodicity]))
# Make sure the divisor is not close to zero at each timestamp
if (scale < 1e-8).any():
warnings.warn("Some values very close to 0, MASE might not be estimated accurately.")
return np.mean(errors / (scale + 1e-8))
def msis(self):
"""
Mean Scaled Interval Score (MSIS)
This metric evaluates the quality of 95% prediction intervals.
For ground truth time series :math:`y` and predicted time series :math:`\\hat{y}`
of length :math:`T`, the lower and upper bounds of the prediction intervals
:math:`L` and :math:`U`. Given in sample time series :math:`\\hat{x}` of length :math:`N`
and periodicity :math:`m`, it is computed as
.. math::
\\frac{1}{T}\\cdot\\frac{\\sum_{t=1}^{T} (U_t - L_t) + 100 \\cdot (L_t - y_t)[y_t<L_t]
+ 100\\cdot(y_t - U_t)[y_t > U_t]}{\\frac{1}{N-m}\\sum_{t=m+1}^{N}\\left| x_t - x_{t-m} \\right|}.
"""
self.check_before_eval()
assert self.insample.dim == 1
assert self.lb is not None and self.ub is not None
insample_values = self.insample.univariates[self.insample.names[0]].np_values
lb_values = self.lb.univariates[self.lb.names[0]].np_values
ub_values = self.ub.univariates[self.ub.names[0]].np_values
ground_truth_values = self.ground_truth.univariates[self.ground_truth.names[0]].np_values
errors = (
np.sum(ub_values - lb_values)
+ 100 * np.sum((lb_values - ground_truth_values)[lb_values > ground_truth_values])
+ 100 * np.sum((ground_truth_values - ub_values)[ground_truth_values > ub_values])
)
scale = np.mean(np.abs(insample_values[self.periodicity :] - insample_values[: -self.periodicity]))
# Make sure the divisor is not close to zero at each timestamp
if (scale < 1e-8).any():
warnings.warn("Some values very close to 0, MSIS might not be estimated accurately.")
return errors / (scale + 1e-8) / len(ground_truth_values)
def accumulate_forecast_score(
ground_truth: TimeSeries,
predict: TimeSeries,
insample: TimeSeries = None,
periodicity=1,
ub: TimeSeries = None,
lb: TimeSeries = None,
metric=None,
target_seq_index=None,
) -> Union[ForecastScoreAccumulator, float]:
acc = ForecastScoreAccumulator(
ground_truth=ground_truth,
predict=predict,
insample=insample,
periodicity=periodicity,
ub=ub,
lb=lb,
target_seq_index=target_seq_index,
)
return acc if metric is None else metric(acc) | null |
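A minimal usage sketch for `accumulate_forecast_score` and the accumulator methods above (the series values are invented for illustration; `TimeSeries.from_pd` is the same constructor used throughout this code):
import numpy as np
import pandas as pd
from merlion.utils import TimeSeries
idx = pd.date_range("2022-01-01", periods=8, freq="D")
ground_truth = TimeSeries.from_pd(pd.DataFrame({"y": np.arange(1.0, 9.0)}, index=idx))
predict = TimeSeries.from_pd(pd.DataFrame({"y": np.arange(1.0, 9.0) + 0.5}, index=idx))
# Build the accumulator once and query several metrics from it ...
acc = accumulate_forecast_score(ground_truth=ground_truth, predict=predict)
print(acc.smape(), acc.rmspe())
# ... or pass a bound method as `metric` to get a single float back.
smape = accumulate_forecast_score(ground_truth, predict, metric=ForecastScoreAccumulator.smape)
MASE and MSIS additionally require `insample`, `periodicity`, and (for MSIS) the `lb`/`ub` bound series when constructing the accumulator.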
249 | import bisect
import logging
import numpy as np
from merlion.evaluate.anomaly import TSADMetric
from merlion.post_process.base import PostRuleBase
from merlion.utils import UnivariateTimeSeries, TimeSeries
The provided code snippet includes necessary dependencies for implementing the `get_adaptive_thres` function. Write a Python function `def get_adaptive_thres(x, hist_gap_thres=None, bin_sz=None)` to solve the following problem:
Look for gaps in the histogram of anomaly scores (i.e. histogram bins with zero items inside them). Set the detection threshold to the avg bin size s.t. the 2 bins have a gap of hist_gap_thres or more
Here is the function:
def get_adaptive_thres(x, hist_gap_thres=None, bin_sz=None):
"""
Look for gaps in the histogram of anomaly scores (i.e. histogram bins with
zero items inside them). Set the detection threshold to the avg bin size s.t.
the 2 bins have a gap of hist_gap_thres or more
"""
nbins = x.shape[0] // bin_sz # FIXME
hist, bins = np.histogram(x, bins=nbins)
idx_list = np.where((hist > 0)[1:] != (hist > 0)[:-1])[0] + 1
for i in list(range((idx_list.shape[0]) - 1)):
if bins[idx_list[i + 1]] / bins[idx_list[i]] > hist_gap_thres:
thres = (bins[idx_list[i + 1]] + bins[idx_list[i]]) / 2
return x > thres, thres
return np.zeros((x.shape[0],)), np.inf | Look for gaps in the histogram of anomaly scores (i.e. histogram bins with zero items inside them). Set the detection threshold to the avg bin size s.t. the 2 bins have a gap of hist_gap_thres or more |
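A small, hedged example of `get_adaptive_thres` on synthetic scores (the array and parameter values are made up; `bin_sz` sets the number of histogram bins via `len(x) // bin_sz`):
import numpy as np
rng = np.random.default_rng(0)
scores = np.concatenate([rng.random(990), 10 + rng.random(10)])  # mostly small scores plus a few large outliers
is_anom, thres = get_adaptive_thres(scores, hist_gap_thres=2.0, bin_sz=10)
print(thres)           # falls inside the empty region between roughly 1 and 10
print(is_anom.sum())   # only the 10 outliers exceed the threshold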
250 | import logging
import traceback
from typing import List, Union
import numpy as np
import pandas as pd
from merlion.models.factory import instantiate_or_copy_model, ModelFactory
from merlion.models.anomaly.base import DetectorBase
from merlion.models.forecast.base import ForecasterBase
from merlion.spark.dataset import TSID_COL_NAME
from merlion.utils import TimeSeries, to_pd_datetime
logger = logging.getLogger(__name__)
def instantiate_or_copy_model(model: Union[dict, ModelBase]):
if isinstance(model, ModelBase):
return copy.deepcopy(model)
elif isinstance(model, dict):
try:
return ModelFactory.create(**model)
except Exception as e:
logger.error(f"Invalid `dict` specifying a model config.\n\nGot {model}")
raise e
else:
raise TypeError(f"Expected model to be a `dict` or `ModelBase`. Got {model}")
class ForecasterBase(ModelBase):
"""
Base class for a forecaster model.
.. note::
If your model depends on an evenly spaced time series, make sure to
1. Call `ForecasterBase.train_pre_process` in `ForecasterBase.train`
2. Call `ForecasterBase.resample_time_stamps` at the start of
`ForecasterBase.forecast` to get a set of resampled time stamps, and
call ``time_series.align(reference=time_stamps)`` to align the forecast
with the original time stamps.
"""
config_class = ForecasterConfig
target_name = None
"""
The name of the target univariate to forecast.
"""
def __init__(self, config: ForecasterConfig):
super().__init__(config)
self.target_name = None
self.exog_dim = None
def max_forecast_steps(self):
return self.config.max_forecast_steps
def target_seq_index(self) -> int:
"""
:return: the index of the univariate (amongst all univariates in a
general multivariate time series) whose value we would like to forecast.
"""
return self.config.target_seq_index
def invert_transform(self):
"""
:return: Whether to automatically invert the ``transform`` before returning a forecast.
"""
return self.config.invert_transform and not self.transform.identity_inversion
def require_univariate(self) -> bool:
"""
All forecasters can work on multivariate data, since they only forecast a single target univariate.
"""
return False
def support_multivariate_output(self) -> bool:
"""
Indicating whether the forecasting model can forecast multivariate output.
"""
return False
def resample_time_stamps(self, time_stamps: Union[int, List[int]], time_series_prev: TimeSeries = None):
assert self.timedelta is not None and self.last_train_time is not None, (
"train() must be called before you can call forecast(). "
"If you have already called train(), make sure it sets "
"self.timedelta and self.last_train_time appropriately."
)
# Determine timedelta & initial time of forecast
dt, offset = self.timedelta, self.timedelta_offset
if time_series_prev is not None and not time_series_prev.is_empty():
t0 = to_pd_datetime(time_series_prev.tf)
else:
t0 = self.last_train_time
# Handle the case where time_stamps is an integer
if isinstance(time_stamps, (int, float)):
n = int(time_stamps)
assert self.max_forecast_steps is None or n <= self.max_forecast_steps
resampled = pd.date_range(start=t0, periods=n + 1, freq=dt) + offset
resampled = resampled[1:] if resampled[0] == t0 else resampled[:-1]
time_stamps = to_timestamp(resampled)
elif not self.require_even_sampling:
resampled = to_pd_datetime(time_stamps)
# Handle the cases where we don't have a max_forecast_steps
elif self.max_forecast_steps is None:
tf = to_pd_datetime(time_stamps[-1])
resampled = pd.date_range(start=t0, end=tf + 2 * dt, freq=dt) + offset
if resampled[0] == t0:
resampled = resampled[1:]
if len(resampled) > 1 and resampled[-2] >= tf:
resampled = resampled[:-1]
# Handle the case where we do have a max_forecast_steps
else:
resampled = pd.date_range(start=t0, periods=self.max_forecast_steps + 1, freq=dt) + offset
resampled = resampled[1:] if resampled[0] == t0 else resampled[:-1]
resampled = resampled[: 1 + sum(resampled < to_pd_datetime(time_stamps[-1]))]
tf = resampled[-1]
assert to_pd_datetime(time_stamps[0]) >= t0 and to_pd_datetime(time_stamps[-1]) <= tf, (
f"Expected `time_stamps` to be between {t0} and {tf}, but `time_stamps` ranges "
f"from {to_pd_datetime(time_stamps[0])} to {to_pd_datetime(time_stamps[-1])}"
)
return to_timestamp(resampled).tolist()
def train_pre_process(
self, train_data: TimeSeries, exog_data: TimeSeries = None, return_exog=None
) -> Union[TimeSeries, Tuple[TimeSeries, Union[TimeSeries, None]]]:
train_data = super().train_pre_process(train_data)
if self.dim == 1:
self.config.target_seq_index = 0
elif self.target_seq_index is None and not self.support_multivariate_output:
raise RuntimeError(
f"Attempting to use a forecaster that does not support multivariate outputs "
f"on a {train_data.dim}-variable "
f"time series, but didn't specify a `target_seq_index` "
f"indicating which univariate is the target."
)
assert self.support_multivariate_output or (0 <= self.target_seq_index < train_data.dim), (
f"Expected `support_multivariate_output = True`,"
f"or `target_seq_index` to be between 0 and {train_data.dim}"
f"(the dimension of the transformed data), but got {self.target_seq_index} "
)
if self.support_multivariate_output and self.target_seq_index is None:
self.target_name = str(train_data.names)
else:
self.target_name = train_data.names[self.target_seq_index]
# Handle exogenous data
if return_exog is None:
return_exog = exog_data is not None
if not self.supports_exog:
if exog_data is not None:
exog_data = None
logger.warning(f"Exogenous regressors are not supported for model {type(self).__name__}")
if exog_data is not None:
self.exog_dim = exog_data.dim
self.config.exog_transform.train(exog_data)
else:
self.exog_dim = None
if return_exog and exog_data is not None:
exog_data, _ = self.transform_exog_data(exog_data=exog_data, time_stamps=train_data.time_stamps)
return (train_data, exog_data) if return_exog else train_data
def train(
self, train_data: TimeSeries, train_config=None, exog_data: TimeSeries = None
) -> Tuple[TimeSeries, Optional[TimeSeries]]:
"""
Trains the forecaster on the input time series.
:param train_data: a `TimeSeries` of metric values to train the model.
:param train_config: Additional training configs, if needed. Only required for some models.
:param exog_data: A time series of exogenous variables, sampled at the same time stamps as ``train_data``.
Exogenous variables are known a priori, and they are independent of the variable being forecasted.
Only supported for models which inherit from `ForecasterExogBase`.
:return: the model's prediction on ``train_data``, in the same format as
if you called `ForecasterBase.forecast` on the time stamps of ``train_data``
"""
if train_config is None:
train_config = copy.deepcopy(self._default_train_config)
train_data, exog_data = self.train_pre_process(train_data, exog_data=exog_data, return_exog=True)
if self._pandas_train:
train_data = train_data.to_pd()
exog_data = None if exog_data is None else exog_data.to_pd()
if exog_data is None:
train_result = self._train(train_data=train_data, train_config=train_config)
else:
train_result = self._train_with_exog(train_data=train_data, train_config=train_config, exog_data=exog_data)
return self.train_post_process(train_result)
def train_post_process(
self, train_result: Tuple[Union[TimeSeries, pd.DataFrame], Optional[Union[TimeSeries, pd.DataFrame]]]
) -> Tuple[TimeSeries, TimeSeries]:
"""
Converts the train result (forecast & stderr for training data) into TimeSeries objects, and inverts the
model's transform if desired.
"""
return self._process_forecast(*train_result)
def transform_exog_data(
self,
exog_data: TimeSeries,
time_stamps: Union[List[int], pd.DatetimeIndex],
time_series_prev: TimeSeries = None,
) -> Union[Tuple[TimeSeries, TimeSeries], Tuple[TimeSeries, None], Tuple[None, None]]:
if exog_data is not None:
logger.warning(f"Exogenous regressors are not supported for model {type(self).__name__}")
return None, None
def _train(self, train_data: pd.DataFrame, train_config=None) -> Tuple[pd.DataFrame, Optional[pd.DataFrame]]:
raise NotImplementedError
def _train_with_exog(
self, train_data: pd.DataFrame, train_config=None, exog_data: pd.DataFrame = None
) -> Tuple[pd.DataFrame, Optional[pd.DataFrame]]:
return self._train(train_data=train_data, train_config=train_config)
def forecast(
self,
time_stamps: Union[int, List[int]],
time_series_prev: TimeSeries = None,
exog_data: TimeSeries = None,
return_iqr: bool = False,
return_prev: bool = False,
) -> Union[Tuple[TimeSeries, Optional[TimeSeries]], Tuple[TimeSeries, TimeSeries, TimeSeries]]:
"""
Returns the model's forecast on the timestamps given. If ``self.transform`` is specified in the config, the
forecast is a forecast of transformed values by default. To invert the transform and forecast the actual
values of the time series, specify ``invert_transform = True`` when specifying the config.
:param time_stamps: Either a ``list`` of timestamps we wish to forecast for, or the number of steps (``int``)
we wish to forecast for.
:param time_series_prev: a time series immediately preceding ``time_series``. If given, we use it to initialize
the forecaster's state. Otherwise, we assume that ``time_series`` immediately follows the training data.
:param exog_data: A time series of exogenous variables. Exogenous variables are known a priori, and they are
independent of the variable being forecasted. ``exog_data`` must include data for all of ``time_stamps``;
if ``time_series_prev`` is given, it must include data for all of ``time_series_prev.time_stamps`` as well.
Optional. Only supported for models which inherit from `ForecasterExogBase`.
:param return_iqr: whether to return the inter-quartile range for the forecast.
Only supported for models which return error bars.
:param return_prev: whether to return the forecast for ``time_series_prev`` (and its stderr or IQR if relevant),
in addition to the forecast for ``time_stamps``. Only used if ``time_series_prev`` is provided.
:return: ``(forecast, stderr)`` if ``return_iqr`` is false, ``(forecast, lb, ub)`` otherwise.
- ``forecast``: the forecast for the timestamps given
- ``stderr``: the standard error of each forecast value. May be ``None``.
- ``lb``: 25th percentile of forecast values for each timestamp
- ``ub``: 75th percentile of forecast values for each timestamp
"""
# Determine the time stamps to forecast for, and resample them if needed
orig_t = None if isinstance(time_stamps, (int, float)) else time_stamps
time_stamps = self.resample_time_stamps(time_stamps, time_series_prev)
if return_prev and time_series_prev is not None:
if orig_t is None:
orig_t = time_series_prev.time_stamps + time_stamps
else:
orig_t = time_series_prev.time_stamps + to_timestamp(orig_t).tolist()
# Transform time_series_prev if it is given
old_inversion_state = self.transform.inversion_state
if time_series_prev is None:
time_series_prev_df = None
else:
time_series_prev = self.transform(time_series_prev)
assert time_series_prev.dim == self.dim, (
f"time_series_prev has dimension of {time_series_prev.dim} that is different from "
f"training data dimension of {self.dim} for the model"
)
time_series_prev_df = time_series_prev.to_pd()
# Make the prediction
exog_data, exog_data_prev = self.transform_exog_data(
exog_data, time_stamps=time_stamps, time_series_prev=time_series_prev
)
if exog_data is None:
forecast, err = self._forecast(
time_stamps=time_stamps, time_series_prev=time_series_prev_df, return_prev=return_prev
)
else:
forecast, err = self._forecast_with_exog(
time_stamps=time_stamps,
time_series_prev=time_series_prev_df,
return_prev=return_prev,
exog_data=exog_data.to_pd(),
exog_data_prev=None if exog_data_prev is None else exog_data_prev.to_pd(),
)
# Format the return values and reset the transform's inversion state
if self.invert_transform and time_series_prev is None:
time_series_prev = self.transform(self.train_data)
if time_series_prev is not None and self.target_seq_index is not None:
time_series_prev = pd.DataFrame(time_series_prev.univariates[time_series_prev.names[self.target_seq_index]])
ret = self._process_forecast(forecast, err, time_series_prev, return_prev=return_prev, return_iqr=return_iqr)
self.transform.inversion_state = old_inversion_state
return tuple(None if x is None else x.align(reference=orig_t) for x in ret)
def _process_forecast(self, forecast, err, time_series_prev=None, return_prev=False, return_iqr=False):
forecast = forecast.to_pd() if isinstance(forecast, TimeSeries) else forecast
if return_prev and time_series_prev is not None:
forecast = pd.concat((time_series_prev, forecast))
# Obtain negative & positive error bars which are appropriately padded
if err is not None:
err = (err,) if not isinstance(err, tuple) else err
assert isinstance(err, tuple) and len(err) in (1, 2)
assert all(isinstance(e, (pd.DataFrame, TimeSeries)) for e in err)
new_err = []
for e in err:
e = e.to_pd() if isinstance(e, TimeSeries) else e
n, d = len(forecast) - len(e), e.shape[1]
if n > 0:
e = pd.concat((pd.DataFrame(np.zeros((n, d)), index=forecast.index[:n], columns=e.columns), e))
e.columns = [f"{c}_err" for c in forecast.columns]
new_err.append(e.abs())
e_neg, e_pos = new_err if len(new_err) == 2 else (new_err[0], new_err[0])
else:
e_neg = e_pos = None
# Compute upper/lower bounds for the (potentially inverted) forecast.
# Only do this if returning the IQR or inverting the transform.
if (return_iqr or self.invert_transform) and e_neg is not None and e_pos is not None:
lb = TimeSeries.from_pd((forecast + e_neg.values * (norm.ppf(0.25) if return_iqr else -1)))
ub = TimeSeries.from_pd((forecast + e_pos.values * (norm.ppf(0.75) if return_iqr else 1)))
if self.invert_transform:
lb = self.transform.invert(lb, retain_inversion_state=True)
ub = self.transform.invert(ub, retain_inversion_state=True)
else:
lb = ub = None
# Convert the forecast to TimeSeries and invert the transform on it if desired
forecast = TimeSeries.from_pd(forecast)
if self.invert_transform:
forecast = self.transform.invert(forecast, retain_inversion_state=True)
# Return the IQR if desired
if return_iqr:
if lb is None or ub is None:
logger.warning("Model returned err = None, so returning IQR = (None, None)")
else:
lb, ub = lb.rename(lambda c: f"{c}_lower"), ub.rename(lambda c: f"{c}_upper")
return forecast, lb, ub
# Otherwise, either compute the stderr from the upper/lower bounds (if relevant), or just use the error
if lb is not None and ub is not None:
err = TimeSeries.from_pd((ub.to_pd() - lb.to_pd().values).rename(columns=lambda c: f"{c}_err").abs() / 2)
elif e_neg is not None and e_pos is not None:
err = TimeSeries.from_pd(e_pos if e_neg is e_pos else (e_neg + e_pos) / 2)
else:
err = None
return forecast, err
def _forecast(
self, time_stamps: List[int], time_series_prev: pd.DataFrame = None, return_prev=False
) -> Tuple[pd.DataFrame, Optional[pd.DataFrame]]:
raise NotImplementedError
def _forecast_with_exog(
self,
time_stamps: List[int],
time_series_prev: pd.DataFrame = None,
return_prev=False,
exog_data: pd.DataFrame = None,
exog_data_prev: pd.DataFrame = None,
) -> Tuple[pd.DataFrame, Optional[pd.DataFrame]]:
return self._forecast(time_stamps=time_stamps, time_series_prev=time_series_prev, return_prev=return_prev)
def batch_forecast(
self,
time_stamps_list: List[List[int]],
time_series_prev_list: List[TimeSeries],
return_iqr: bool = False,
return_prev: bool = False,
) -> Tuple[
Union[
Tuple[List[TimeSeries], List[Optional[TimeSeries]]],
Tuple[List[TimeSeries], List[TimeSeries], List[TimeSeries]],
]
]:
"""
Returns the model's forecast on a batch of timestamps given.
:param time_stamps_list: a list of lists of timestamps we wish to forecast for
:param time_series_prev_list: a list of TimeSeries immediately preceding the time stamps in time_stamps_list
:param return_iqr: whether to return the inter-quartile range for the forecast.
Only supported by models which can return error bars.
:param return_prev: whether to return the forecast for ``time_series_prev`` (and its stderr or IQR if relevant),
in addition to the forecast for ``time_stamps``. Only used if ``time_series_prev`` is provided.
:return: ``(forecast, forecast_stderr)`` if ``return_iqr`` is false,
``(forecast, forecast_lb, forecast_ub)`` otherwise.
- ``forecast``: the forecast for the timestamps given
- ``forecast_stderr``: the standard error of each forecast value. May be ``None``.
- ``forecast_lb``: 25th percentile of forecast values for each timestamp
- ``forecast_ub``: 75th percentile of forecast values for each timestamp
"""
out_list = []
if time_series_prev_list is None:
time_series_prev_list = [None for _ in range(len(time_stamps_list))]
for time_stamps, time_series_prev in zip(time_stamps_list, time_series_prev_list):
out = self.forecast(
time_stamps=time_stamps,
time_series_prev=time_series_prev,
return_iqr=return_iqr,
return_prev=return_prev,
)
out_list.append(out)
return tuple(zip(*out_list))
def get_figure(
self,
*,
time_series: TimeSeries = None,
time_stamps: List[int] = None,
time_series_prev: TimeSeries = None,
exog_data: TimeSeries = None,
plot_forecast_uncertainty=False,
plot_time_series_prev=False,
) -> Figure:
"""
:param time_series: the time series over whose timestamps we wish to make a forecast. Exactly one of
``time_series`` or ``time_stamps`` should be provided.
:param time_stamps: Either a ``list`` of timestamps we wish to forecast for, or the number of steps (``int``)
we wish to forecast for. Exactly one of ``time_series`` or ``time_stamps`` should be provided.
:param time_series_prev: a time series immediately preceding ``time_series``. If given, we use it to initialize
the forecaster's state. Otherwise, we assume that ``time_series`` immediately follows the training data.
:param exog_data: A time series of exogenous variables. Exogenous variables are known a priori, and they are
independent of the variable being forecasted. ``exog_data`` must include data for all of ``time_stamps``;
if ``time_series_prev`` is given, it must include data for all of ``time_series_prev.time_stamps`` as well.
Optional. Only supported for models which inherit from `ForecasterExogBase`.
:param plot_forecast_uncertainty: whether to plot uncertainty estimates (the inter-quartile range) for forecast
values. Not supported for all models.
:param plot_time_series_prev: whether to plot ``time_series_prev`` (and the model's fit for it).
Only used if ``time_series_prev`` is given.
:return: a `Figure` of the model's forecast.
"""
assert not (
time_series is None and time_stamps is None
), "Must provide at least one of time_series or time_stamps"
if time_stamps is None:
if self.invert_transform:
time_stamps = time_series.time_stamps
y = time_series.univariates[time_series.names[self.target_seq_index]]
else:
transformed_ts = self.transform(time_series)
time_stamps = transformed_ts.time_stamps
y = transformed_ts.univariates[transformed_ts.names[self.target_seq_index]]
else:
y = None
# Get forecast + bounds if plotting uncertainty
if plot_forecast_uncertainty:
yhat, lb, ub = self.forecast(
time_stamps, time_series_prev, exog_data=exog_data, return_iqr=True, return_prev=plot_time_series_prev
)
yhat, lb, ub = [None if x is None else x.univariates[x.names[0]] for x in [yhat, lb, ub]]
# Just get the forecast otherwise
else:
lb, ub = None, None
yhat, err = self.forecast(
time_stamps, time_series_prev, exog_data=exog_data, return_iqr=False, return_prev=plot_time_series_prev
)
yhat = yhat.univariates[yhat.names[0]]
# Set up all the parameters needed to make a figure
if time_series_prev is not None and plot_time_series_prev:
if not self.invert_transform:
time_series_prev = self.transform(time_series_prev)
time_series_prev = time_series_prev.univariates[time_series_prev.names[self.target_seq_index]]
n_prev = len(time_series_prev)
yhat_prev, yhat = yhat[:n_prev], yhat[n_prev:]
if lb is not None and ub is not None:
lb_prev, lb = lb[:n_prev], lb[n_prev:]
ub_prev, ub = ub[:n_prev], ub[n_prev:]
else:
lb_prev = ub_prev = None
else:
time_series_prev = None
yhat_prev = lb_prev = ub_prev = None
# Create the figure
return Figure(
y=y,
yhat=yhat,
yhat_lb=lb,
yhat_ub=ub,
y_prev=time_series_prev,
yhat_prev=yhat_prev,
yhat_prev_lb=lb_prev,
yhat_prev_ub=ub_prev,
)
def plot_forecast(
self,
*,
time_series: TimeSeries = None,
time_stamps: List[int] = None,
time_series_prev: TimeSeries = None,
exog_data: TimeSeries = None,
plot_forecast_uncertainty=False,
plot_time_series_prev=False,
figsize=(1000, 600),
ax=None,
):
"""
Plots the forecast for the time series in matplotlib, optionally also
plotting the uncertainty of the forecast, as well as the past values
(both true and predicted) of the time series.
:param time_series: the time series over whose timestamps we wish to make a forecast. Exactly one of
``time_series`` or ``time_stamps`` should be provided.
:param time_stamps: Either a ``list`` of timestamps we wish to forecast for, or the number of steps (``int``)
we wish to forecast for. Exactly one of ``time_series`` or ``time_stamps`` should be provided.
:param time_series_prev: a time series immediately preceding ``time_series``. If given, we use it to initialize
the forecaster's state. Otherwise, we assume that ``time_series`` immediately follows the training data.
:param exog_data: A time series of exogenous variables. Exogenous variables are known a priori, and they are
independent of the variable being forecasted. ``exog_data`` must include data for all of ``time_stamps``;
if ``time_series_prev`` is given, it must include data for all of ``time_series_prev.time_stamps`` as well.
Optional. Only supported for models which inherit from `ForecasterExogBase`.
:param plot_forecast_uncertainty: whether to plot uncertainty estimates (the inter-quartile range) for forecast
values. Not supported for all models.
:param plot_time_series_prev: whether to plot ``time_series_prev`` (and the model's fit for it). Only used if
``time_series_prev`` is given.
:param figsize: figure size in pixels
:param ax: matplotlib axis to add this plot to
:return: (fig, ax): matplotlib figure & axes the figure was plotted on
"""
fig = self.get_figure(
time_series=time_series,
time_stamps=time_stamps,
time_series_prev=time_series_prev,
exog_data=exog_data,
plot_forecast_uncertainty=plot_forecast_uncertainty,
plot_time_series_prev=plot_time_series_prev,
)
title = f"{type(self).__name__}: Forecast of {self.target_name}"
return fig.plot(title=title, metric_name=self.target_name, figsize=figsize, ax=ax)
def plot_forecast_plotly(
self,
*,
time_series: TimeSeries = None,
time_stamps: List[int] = None,
time_series_prev: TimeSeries = None,
exog_data: TimeSeries = None,
plot_forecast_uncertainty=False,
plot_time_series_prev=False,
figsize=(1000, 600),
):
"""
Plots the forecast for the time series in plotly, optionally also
plotting the uncertainty of the forecast, as well as the past values
(both true and predicted) of the time series.
:param time_series: the time series over whose timestamps we wish to make a forecast. Exactly one of
``time_series`` or ``time_stamps`` should be provided.
:param time_stamps: Either a ``list`` of timestamps we wish to forecast for, or the number of steps (``int``)
we wish to forecast for. Exactly one of ``time_series`` or ``time_stamps`` should be provided.
:param time_series_prev: a time series immediately preceding ``time_series``. If given, we use it to initialize
the forecaster's state. Otherwise, we assume that ``time_series`` immediately follows the training data.
:param exog_data: A time series of exogenous variables. Exogenous variables are known a priori, and they are
independent of the variable being forecasted. ``exog_data`` must include data for all of ``time_stamps``;
if ``time_series_prev`` is given, it must include data for all of ``time_series_prev.time_stamps`` as well.
Optional. Only supported for models which inherit from `ForecasterExogBase`.
:param plot_forecast_uncertainty: whether to plot uncertainty estimates (the
inter-quartile range) for forecast values. Not supported for all
models.
:param plot_time_series_prev: whether to plot ``time_series_prev`` (and
the model's fit for it). Only used if ``time_series_prev`` is given.
:param figsize: figure size in pixels
"""
fig = self.get_figure(
time_series=time_series,
time_stamps=time_stamps,
time_series_prev=time_series_prev,
exog_data=exog_data,
plot_forecast_uncertainty=plot_forecast_uncertainty,
plot_time_series_prev=plot_time_series_prev,
)
title = f"{type(self).__name__}: Forecast of {self.target_name}"
return fig.plot_plotly(title=title, metric_name=self.target_name, figsize=figsize)
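A hedged end-to-end sketch of the `ForecasterBase` API above, using `Arima` as one concrete subclass (the import path and the `ArimaConfig(max_forecast_steps=...)` constructor are assumptions about the wider Merlion package; the sine-wave data is invented):
import numpy as np
import pandas as pd
from merlion.utils import TimeSeries
from merlion.models.forecast.arima import Arima, ArimaConfig
idx = pd.date_range("2022-01-01", periods=120, freq="h")
train = TimeSeries.from_pd(pd.DataFrame({"y": np.sin(np.arange(120) / 6.0)}, index=idx))
model = Arima(ArimaConfig(max_forecast_steps=12))
train_pred, train_stderr = model.train(train)       # in-sample forecast & stderr, as train() documents
forecast, stderr = model.forecast(time_stamps=12)   # an int is resampled into 12 future time stamps
forecast, lb, ub = model.forecast(time_stamps=12, return_iqr=True)  # 25th/75th percentile bounds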
TSID_COL_NAME = "__ts_id"
The provided code snippet includes necessary dependencies for implementing the `forecast` function. Write a Python function `def forecast( pdf: pd.DataFrame, index_cols: List[str], time_col: str, target_col: str, time_stamps: Union[List[int], List[str]], model: Union[ForecasterBase, dict], predict_on_train: bool = False, agg_dict: dict = None, ) -> pd.DataFrame` to solve the following problem:
Pyspark pandas UDF for performing forecasting. Should be called on a pyspark dataframe grouped by time series ID, i.e. by ``index_cols``. :param pdf: The ``pandas.DataFrame`` containing the training data. Should be a single time series. :param index_cols: The list of column names used to index all the time series in the dataset. Not used for modeling. :param time_col: The name of the column containing the timestamps. :param target_col: The name of the column whose value we wish to forecast. :param time_stamps: The timestamps at which we would like to obtain a forecast. :param model: The model (or model ``dict``) we are using to obtain a forecast. :param predict_on_train: Whether to return the model's prediction on the training data. :param agg_dict: A dictionary used to specify how different data columns should be aggregated. If a non-target data column is not in agg_dict, we do not model it for aggregated time series. :return: A ``pandas.DataFrame`` with the forecast & its standard error (NaN if the model doesn't have error bars). Columns are ``[*index_cols, time_col, target_col, target_col + \"_err\"]``.
Here is the function:
def forecast(
pdf: pd.DataFrame,
index_cols: List[str],
time_col: str,
target_col: str,
time_stamps: Union[List[int], List[str]],
model: Union[ForecasterBase, dict],
predict_on_train: bool = False,
agg_dict: dict = None,
) -> pd.DataFrame:
"""
Pyspark pandas UDF for performing forecasting.
Should be called on a pyspark dataframe grouped by time series ID, i.e. by ``index_cols``.
:param pdf: The ``pandas.DataFrame`` containing the training data. Should be a single time series.
:param index_cols: The list of column names used to index all the time series in the dataset. Not used for modeling.
:param time_col: The name of the column containing the timestamps.
:param target_col: The name of the column whose value we wish to forecast.
:param time_stamps: The timestamps at which we would like to obtain a forecast.
:param model: The model (or model ``dict``) we are using to obtain a forecast.
:param predict_on_train: Whether to return the model's prediction on the training data.
:param agg_dict: A dictionary used to specify how different data columns should be aggregated. If a non-target
data column is not in agg_dict, we do not model it for aggregated time series.
:return: A ``pandas.DataFrame`` with the forecast & its standard error (NaN if the model doesn't have error bars).
Columns are ``[*index_cols, time_col, target_col, target_col + "_err"]``.
"""
# If the time series has been aggregated, drop non-target columns which are not explicitly specified in agg_dict.
if TSID_COL_NAME not in index_cols and TSID_COL_NAME in pdf.columns:
index_cols = index_cols + [TSID_COL_NAME]
if (pdf.loc[:, index_cols] == "__aggregated__").any().any():
data_cols = [c for c in pdf.columns if c not in index_cols + [time_col]]
pdf = pdf.drop(columns=[c for c in data_cols if c != target_col and c not in agg_dict])
# Sort the dataframe by time & turn it into a Merlion time series
pdf = pdf.sort_values(by=time_col)
ts = TimeSeries.from_pd(pdf.drop(columns=index_cols).set_index(time_col))
# Create model
model = instantiate_or_copy_model(model or {"name": "DefaultForecaster"})
if not isinstance(model, ForecasterBase):
raise TypeError(f"Expected `model` to be an instance of ForecasterBase, but got {model}.")
# Train model & run forecast
try:
train_pred, train_err = model.train(ts)
pred, err = model.forecast(time_stamps=time_stamps)
except Exception:
row0 = pdf.iloc[0]
idx = ", ".join(f"{k} = {row0[k]}" for k in index_cols)
logger.warning(
f"Model {type(model).__name__} threw an exception on ({idx}). Returning the mean training value as a "
f"placeholder forecast. {traceback.format_exc()}"
)
meanval = pdf.loc[:, target_col].mean().item()
train_err, err = None, None
train_pred = TimeSeries.from_pd(pd.DataFrame(meanval, index=pdf[time_col], columns=[target_col]))
pred = TimeSeries.from_pd(pd.DataFrame(meanval, index=to_pd_datetime(time_stamps), columns=[target_col]))
# Concatenate train & test results if predict_on_train is True
if predict_on_train:
if train_pred is not None and pred is not None:
pred = train_pred + pred
if train_err is not None and err is not None:
err = train_err + err
# Combine forecast & stderr into a single dataframe
pred = pred.to_pd()
dtype = pred.dtypes[0]
err = pd.DataFrame(np.full(len(pred), np.nan), index=pred.index, dtype=dtype) if err is None else err.to_pd()
pred = pd.DataFrame(pred.iloc[:, 0].rename(target_col))
err = pd.DataFrame(err.iloc[:, 0].rename(f"{target_col}_err"))
pred_pdf = pd.concat([pred, err], axis=1)
# Turn the time index into a regular column, and add the index columns back to the prediction
pred_pdf.index.name = time_col
pred_pdf.reset_index(inplace=True)
index_pdf = pd.concat([pdf[index_cols].iloc[:1]] * len(pred_pdf), ignore_index=True)
return pd.concat((index_pdf, pred_pdf), axis=1) | Pyspark pandas UDF for performing forecasting. Should be called on a pyspark dataframe grouped by time series ID, i.e. by ``index_cols``. :param pdf: The ``pandas.DataFrame`` containing the training data. Should be a single time series. :param index_cols: The list of column names used to index all the time series in the dataset. Not used for modeling. :param time_col: The name of the column containing the timestamps. :param target_col: The name of the column whose value we wish to forecast. :param time_stamps: The timestamps at which we would like to obtain a forecast. :param model: The model (or model ``dict``) we are using to obtain a forecast. :param predict_on_train: Whether to return the model's prediction on the training data. :param agg_dict: A dictionary used to specify how different data columns should be aggregated. If a non-target data column is not in agg_dict, we do not model it for aggregated time series. :return: A ``pandas.DataFrame`` with the forecast & its standard error (NaN if the model doesn't have error bars). Columns are ``[*index_cols, time_col, target_col, target_col + \"_err\"]``. |
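A hedged sketch of wiring the `forecast` UDF into pyspark with `applyInPandas` (the column names, toy data, schema string, and local SparkSession are illustrative assumptions; on such a tiny series the UDF may simply fall back to its mean-forecast placeholder, which is exactly the robustness it is written for):
from functools import partial
import pandas as pd
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[1]").getOrCreate()
pdf_in = pd.DataFrame({
    "store": ["A"] * 4 + ["B"] * 4,
    "__ts_id": [0] * 4 + [1] * 4,
    "time": list(pd.date_range("2022-01-01", periods=4, freq="D")) * 2,
    "sales": [1.0, 2.0, 3.0, 4.0, 10.0, 11.0, 12.0, 13.0],
})
spark_df = spark.createDataFrame(pdf_in)
udf = partial(
    forecast,
    index_cols=["store"],
    time_col="time",
    target_col="sales",
    time_stamps=["2022-01-05", "2022-01-06"],
    model={"name": "DefaultForecaster"},
)
# Output columns follow [*index_cols, __ts_id, time_col, target_col, target_col + "_err"].
schema = "store string, __ts_id long, time timestamp, sales double, sales_err double"
spark_df.groupBy("store").applyInPandas(udf, schema=schema).show()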
251 | import logging
import traceback
from typing import List, Union
import numpy as np
import pandas as pd
from merlion.models.factory import instantiate_or_copy_model, ModelFactory
from merlion.models.anomaly.base import DetectorBase
from merlion.models.forecast.base import ForecasterBase
from merlion.spark.dataset import TSID_COL_NAME
from merlion.utils import TimeSeries, to_pd_datetime
logger = logging.getLogger(__name__)
class ModelFactory:
def get_model_class(cls, name: str) -> Type[ModelBase]:
return dynamic_import(name, import_alias)
def create(cls, name, return_unused_kwargs=False, **kwargs) -> Union[ModelBase, Tuple[ModelBase, Dict]]:
model_class = cls.get_model_class(name)
config, kwargs = model_class.config_class.from_dict(kwargs, return_unused_kwargs=True)
# initialize the model
signature = inspect.signature(model_class)
init_kwargs = {k: v for k, v in kwargs.items() if k in signature.parameters}
kwargs = {k: v for k, v in kwargs.items() if k not in init_kwargs}
model = model_class(config=config, **init_kwargs)
# set model state with remaining kwargs, and return any unused kwargs if desired
if return_unused_kwargs:
state = {k: v for k, v in kwargs.items() if hasattr(model, k)}
model._load_state(state)
return model, {k: v for k, v in kwargs.items() if k not in state}
model._load_state(kwargs)
return model
def load(cls, name, model_path, **kwargs) -> ModelBase:
if model_path is None:
return cls.create(name, **kwargs)
else:
model_class = cls.get_model_class(name)
return model_class.load(model_path, **kwargs)
def load_bytes(cls, obj, **kwargs) -> ModelBase:
name = dill.loads(obj)[0]
model_class = cls.get_model_class(name)
return model_class.from_bytes(obj, **kwargs)
def instantiate_or_copy_model(model: Union[dict, ModelBase]):
if isinstance(model, ModelBase):
return copy.deepcopy(model)
elif isinstance(model, dict):
try:
return ModelFactory.create(**model)
except Exception as e:
logger.error(f"Invalid `dict` specifying a model config.\n\nGot {model}")
raise e
else:
raise TypeError(f"Expected model to be a `dict` or `ModelBase`. Got {model}")
class DetectorBase(ModelBase):
"""
Base class for an anomaly detection model.
"""
config_class = DetectorConfig
def __init__(self, config: DetectorConfig):
"""
:param config: model configuration
"""
super().__init__(config)
def _default_post_rule_train_config(self):
"""
:return: the default config to use when training the post-rule.
"""
from merlion.evaluate.anomaly import TSADMetric
t = self.config._default_threshold.alm_threshold
# self.calibrator is only None if calibration has been manually disabled
# and the anomaly scores are expected to be calibrated by get_anomaly_score(). If
# self.config.enable_calibrator, the model will return a calibrated score.
if self.calibrator is None or self.config.enable_calibrator or t == 0:
q = None
# otherwise, choose the quantile corresponding to the given threshold
else:
q = 2 * norm.cdf(t) - 1
return dict(metric=TSADMetric.F1, unsup_quantile=q)
def threshold(self):
return self.config.threshold
def threshold(self, threshold):
self.config.threshold = threshold
def calibrator(self):
return self.config.calibrator
def post_rule(self):
return self.config.post_rule
def train(
self, train_data: TimeSeries, train_config=None, anomaly_labels: TimeSeries = None, post_rule_train_config=None
) -> TimeSeries:
"""
Trains the anomaly detector (unsupervised) and its post-rule (supervised, if labels are given) on train data.
:param train_data: a `TimeSeries` of metric values to train the model.
:param train_config: Additional training configs, if needed. Only required for some models.
:param anomaly_labels: a `TimeSeries` indicating which timestamps are anomalous. Optional.
:param post_rule_train_config: The config to use for training the model's post-rule. The model's default
post-rule train config is used if none is supplied here.
:return: A `TimeSeries` of the model's anomaly scores on the training data.
"""
if train_config is None:
train_config = copy.deepcopy(self._default_train_config)
train_data = self.train_pre_process(train_data)
train_data = train_data.to_pd() if self._pandas_train else train_data
train_result = call_with_accepted_kwargs( # For ensembles
self._train, train_data=train_data, train_config=train_config, anomaly_labels=anomaly_labels
)
return self.train_post_process(
train_result=train_result, anomaly_labels=anomaly_labels, post_rule_train_config=post_rule_train_config
)
def train_post_process(
self, train_result: Union[TimeSeries, pd.DataFrame], anomaly_labels=None, post_rule_train_config=None
) -> TimeSeries:
"""
Converts the train result (anom scores on train data) into a TimeSeries object and trains the post-rule.
:param train_result: Raw anomaly scores on the training data.
:param anomaly_labels: a `TimeSeries` indicating which timestamps are anomalous. Optional.
:param post_rule_train_config: The config to use for training the model's post-rule. The model's default
post-rule train config is used if none is supplied here.
"""
anomaly_scores = UnivariateTimeSeries.from_pd(train_result, name="anom_score").to_ts()
if self.post_rule is not None:
kwargs = copy.copy(self._default_post_rule_train_config)
if post_rule_train_config is not None:
kwargs.update(post_rule_train_config)
kwargs.update(anomaly_scores=anomaly_scores, anomaly_labels=anomaly_labels)
call_with_accepted_kwargs(self.post_rule.train, **kwargs)
return anomaly_scores
def _train(self, train_data: pd.DataFrame, train_config=None) -> pd.DataFrame:
raise NotImplementedError
def _get_anomaly_score(self, time_series: pd.DataFrame, time_series_prev: pd.DataFrame = None) -> pd.DataFrame:
raise NotImplementedError
def get_anomaly_score(self, time_series: TimeSeries, time_series_prev: TimeSeries = None) -> TimeSeries:
"""
Returns the model's predicted sequence of anomaly scores.
:param time_series: the `TimeSeries` we wish to predict anomaly scores
for.
:param time_series_prev: a `TimeSeries` immediately preceding
``time_series``. If given, we use it to initialize the time series
anomaly detection model. Otherwise, we assume that ``time_series``
immediately follows the training data.
:return: a univariate `TimeSeries` of anomaly scores
"""
# Ensure the dimensions are correct
assert (
time_series.dim == self.dim
), f"Expected time_series to have dimension {self.dim}, but got {time_series.dim}."
if time_series_prev is not None:
assert (
time_series_prev.dim == self.dim
), f"Expected time_series_prev to have dimension {self.dim}, but got {time_series_prev.dim}."
# Transform the time series
time_series, time_series_prev = self.transform_time_series(time_series, time_series_prev)
if self.require_univariate:
assert time_series.dim == 1, (
f"{type(self).__name__} model only accepts univariate time series, but time series "
f"(after transform {self.transform}) has dimension {time_series.dim}."
)
time_series = time_series.to_pd()
if time_series_prev is not None:
time_series_prev = time_series_prev.to_pd()
# Get the anomaly scores & ensure the dimensions are correct
anom_scores = self._get_anomaly_score(time_series, time_series_prev)
assert anom_scores.shape[1] == 1, f"Expected anomaly scores returned by {type(self)} to be univariate."
return UnivariateTimeSeries.from_pd(anom_scores, name="anom_score").to_ts()
def get_anomaly_label(self, time_series: TimeSeries, time_series_prev: TimeSeries = None) -> TimeSeries:
"""
Returns the model's predicted sequence of anomaly scores, processed
by any relevant post-rules (calibration and/or thresholding).
:param time_series: the `TimeSeries` we wish to predict anomaly scores
for.
:param time_series_prev: a `TimeSeries` immediately preceding
``time_series``. If given, we use it to initialize the time series
anomaly detection model. Otherwise, we assume that ``time_series``
immediately follows the training data.
:return: a univariate `TimeSeries` of anomaly scores, filtered by the
model's post-rule
"""
scores = self.get_anomaly_score(time_series, time_series_prev)
return self.post_rule(scores) if self.post_rule is not None else scores
def get_figure(
self,
time_series: TimeSeries,
time_series_prev: TimeSeries = None,
*,
filter_scores=True,
plot_time_series_prev=False,
fig: Figure = None,
**kwargs,
) -> Figure:
"""
:param time_series: The `TimeSeries` we wish to plot & predict anomaly scores for.
:param time_series_prev: a `TimeSeries` immediately preceding
``time_stamps``. If given, we use it to initialize the time series
model. Otherwise, we assume that ``time_stamps`` immediately follows
the training data.
:param filter_scores: whether to filter the anomaly scores by the
post-rule before plotting them.
:param plot_time_series_prev: whether to plot ``time_series_prev`` (and
the model's fit for it). Only used if ``time_series_prev`` is given.
:param fig: a `Figure` we might want to add anomaly scores onto.
:return: a `Figure` of the model's anomaly score predictions.
"""
f = self.get_anomaly_label if filter_scores else self.get_anomaly_score
scores = f(time_series, time_series_prev=time_series_prev, **kwargs)
scores = scores.univariates[scores.names[0]]
# Get the severity level associated with each value & convert things to
# numpy arrays as well
assert time_series.dim == 1, (
f"Plotting only supported for univariate time series, but got a"
f"time series of dimension {time_series.dim}"
)
time_series = time_series.univariates[time_series.names[0]]
if fig is None:
if time_series_prev is not None and plot_time_series_prev:
k = time_series_prev.names[0]
time_series_prev = time_series_prev.univariates[k]
elif not plot_time_series_prev:
time_series_prev = None
fig = Figure(y=time_series, y_prev=time_series_prev, anom=scores)
else:
if fig.y is None:
fig.y = time_series
fig.anom = scores
return fig
def plot_anomaly(
self,
time_series: TimeSeries,
time_series_prev: TimeSeries = None,
*,
filter_scores=True,
plot_time_series_prev=False,
figsize=(1000, 600),
ax=None,
):
"""
Plots the time series in matplotlib as a line graph, with points in the
series overlaid as points color-coded to indicate their severity as
anomalies.
:param time_series: The `TimeSeries` we wish to plot & predict anomaly scores for.
:param time_series_prev: a `TimeSeries` immediately preceding
``time_series``. Plotted as context if given.
:param filter_scores: whether to filter the anomaly scores by the
post-rule before plotting them.
:param plot_time_series_prev: whether to plot ``time_series_prev`` (and
the model's fit for it). Only used if ``time_series_prev`` is given.
:param figsize: figure size in pixels
:param ax: matplotlib axes to add this plot to
:return: matplotlib figure & axes
"""
metric_name = time_series.names[0]
title = f"{type(self).__name__}: Anomalies in {metric_name}"
fig = self.get_figure(
time_series=time_series,
time_series_prev=time_series_prev,
filter_scores=filter_scores,
plot_time_series_prev=plot_time_series_prev,
)
return fig.plot(title=title, figsize=figsize, ax=ax)
def plot_anomaly_plotly(
self,
time_series: TimeSeries,
time_series_prev: TimeSeries = None,
*,
filter_scores=True,
plot_time_series_prev=False,
figsize=None,
):
"""
Plots the time series in plotly as a line graph, with points in the
series overlaid as points color-coded to indicate their severity as
anomalies.
:param time_series: The `TimeSeries` we wish to plot & predict anomaly scores for.
:param time_series_prev: a `TimeSeries` immediately preceding
``time_series``. Plotted as context if given.
:param filter_scores: whether to filter the anomaly scores by the
post-rule before plotting them.
:param plot_time_series_prev: whether to plot ``time_series_prev`` (and
the model's fit for it). Only used if ``time_series_prev`` is given.
:param figsize: figure size in pixels
:return: plotly figure
"""
title = f"{type(self).__name__}: Anomalies in Time Series"
f = self.get_anomaly_label if filter_scores else self.get_anomaly_score
scores = f(time_series, time_series_prev=time_series_prev)
fig = MTSFigure(y=time_series, y_prev=time_series_prev, anom=scores)
return fig.plot_plotly(title=title, figsize=figsize)
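A hedged usage sketch of the `DetectorBase` workflow above, using `IsolationForest` as one concrete subclass (the import path and the no-argument `IsolationForestConfig()` are assumptions about the wider Merlion package; the data is synthetic with a single injected spike):
import numpy as np
import pandas as pd
from merlion.utils import TimeSeries
from merlion.models.anomaly.isolation_forest import IsolationForest, IsolationForestConfig
rng = np.random.default_rng(0)
values = rng.normal(size=200)
values[150] = 12.0                                    # one obvious anomaly in the test split
df = pd.DataFrame({"metric": values}, index=pd.date_range("2022-01-01", periods=200, freq="h"))
train, test = TimeSeries.from_pd(df.iloc[:100]), TimeSeries.from_pd(df.iloc[100:])
model = IsolationForest(IsolationForestConfig())
model.train(train)                                    # unsupervised; also fits the default post-rule
labels = model.get_anomaly_label(test)                # calibrated + thresholded anomaly scores
print(labels.to_pd().abs().max())                     # the injected spike should dominate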
TSID_COL_NAME = "__ts_id"
The provided code snippet includes necessary dependencies for implementing the `anomaly` function. Write a Python function `def anomaly( pdf: pd.DataFrame, index_cols: List[str], time_col: str, train_test_split: Union[int, str], model: Union[DetectorBase, dict], predict_on_train: bool = False, ) -> pd.DataFrame` to solve the following problem:
Pyspark pandas UDF for performing anomaly detection. Should be called on a pyspark dataframe grouped by time series ID, i.e. by ``index_cols``. :param pdf: The ``pandas.DataFrame`` containing the training and testing data. Should be a single time series. :param index_cols: The list of column names used to index all the time series in the dataset. Not used for modeling. :param time_col: The name of the column containing the timestamps. :param train_test_split: The time at which the testing data starts. :param model: The model (or model ``dict``) we are using to predict anomaly scores. :param predict_on_train: Whether to return the model's prediction on the training data. :return: A ``pandas.DataFrame`` with the anomaly scores on the test data. Columns are ``[*index_cols, time_col, \"anom_score\"]``.
Here is the function:
def anomaly(
pdf: pd.DataFrame,
index_cols: List[str],
time_col: str,
train_test_split: Union[int, str],
model: Union[DetectorBase, dict],
predict_on_train: bool = False,
) -> pd.DataFrame:
"""
Pyspark pandas UDF for performing anomaly detection.
Should be called on a pyspark dataframe grouped by time series ID, i.e. by ``index_cols``.
:param pdf: The ``pandas.DataFrame`` containing the training and testing data. Should be a single time series.
:param index_cols: The list of column names used to index all the time series in the dataset. Not used for modeling.
:param time_col: The name of the column containing the timestamps.
:param train_test_split: The time at which the testing data starts.
:param model: The model (or model ``dict``) we are using to predict anomaly scores.
:param predict_on_train: Whether to return the model's prediction on the training data.
:return: A ``pandas.DataFrame`` with the anomaly scores on the test data.
Columns are ``[*index_cols, time_col, "anom_score"]``.
"""
# Sort the dataframe by time & turn it into a Merlion time series
if TSID_COL_NAME not in index_cols and TSID_COL_NAME in pdf.columns:
index_cols = index_cols + [TSID_COL_NAME]
pdf = pdf.sort_values(by=time_col)
ts = TimeSeries.from_pd(pdf.drop(columns=index_cols).set_index(time_col))
# Create model
model = instantiate_or_copy_model(model or {"name": "DefaultDetector"})
if not isinstance(model, DetectorBase):
raise TypeError(f"Expected `model` to be an instance of DetectorBase, but got {model}.")
# Train model & run inference
exception = False
train, test = ts.bisect(train_test_split, t_in_left=False)
try:
train_pred = model.train(train)
train_pred = model.post_rule(train_pred).to_pd()
pred = model.get_anomaly_label(test).to_pd()
except Exception:
exception = True
row0 = pdf.iloc[0]
idx = ", ".join(f"{k} = {row0[k]}" for k in index_cols)
logger.warning(
f"Model {type(model).__name__} threw an exception on ({idx}). {traceback.format_exc()}"
f"Trying StatThreshold model instead.\n"
)
if exception:
try:
model = ModelFactory.create(name="StatThreshold", target_seq_index=0, threshold=model.threshold)
train_pred = model.train(train)
train_pred = model.post_rule(train_pred).to_pd()
pred = model.get_anomaly_label(test).to_pd()
except Exception:
logger.warning(
f"Model StatThreshold threw an exception on ({idx}).{traceback.format_exc()}"
f"Returning anomaly score = 0 as a placeholder.\n"
)
train_pred = pd.DataFrame(0, index=to_pd_datetime(train.time_stamps), columns=["anom_score"])
pred = pd.DataFrame(0, index=to_pd_datetime(test.time_stamps), columns=["anom_score"])
if predict_on_train and train_pred is not None:
pred = pd.concat((train_pred, pred))
# Turn the time index into a regular column, and add the index columns back to the prediction
pred.index.name = time_col
pred.reset_index(inplace=True)
index_pdf = pd.concat([pdf[index_cols].iloc[:1]] * len(pred), ignore_index=True)
return pd.concat((index_pdf, pred), axis=1) | Pyspark pandas UDF for performing anomaly detection. Should be called on a pyspark dataframe grouped by time series ID, i.e. by ``index_cols``. :param pdf: The ``pandas.DataFrame`` containing the training and testing data. Should be a single time series. :param index_cols: The list of column names used to index all the time series in the dataset. Not used for modeling. :param time_col: The name of the column containing the timestamps. :param train_test_split: The time at which the testing data starts. :param model: The model (or model ``dict``) we are using to predict anomaly scores. :param predict_on_train: Whether to return the model's prediction on the training data. :return: A ``pandas.DataFrame`` with the anomaly scores on the test data. Columns are ``[*index_cols, time_col, \"anom_score\"]``. |
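Because `anomaly` is a plain pandas UDF, it can be exercised directly on a single-group pandas DataFrame before wiring it into `applyInPandas` (the toy data and split below are invented; with so few training points the UDF will typically take its StatThreshold or zero-score fallback path, which is exactly the robustness it is written for):
import pandas as pd
pdf = pd.DataFrame({
    "host": ["a"] * 6,
    "__ts_id": [0] * 6,
    "time": pd.date_range("2022-01-01", periods=6, freq="h"),
    "cpu": [1.0, 1.1, 0.9, 1.0, 5.0, 1.0],
})
out = anomaly(
    pdf,
    index_cols=["host"],
    time_col="time",
    train_test_split="2022-01-01 03:00:00",
    model={"name": "DefaultDetector"},
)
print(out.columns.tolist())   # ['host', '__ts_id', 'time', 'anom_score']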
252 | import logging
import traceback
from typing import List, Union
import numpy as np
import pandas as pd
from merlion.models.factory import instantiate_or_copy_model, ModelFactory
from merlion.models.anomaly.base import DetectorBase
from merlion.models.forecast.base import ForecasterBase
from merlion.spark.dataset import TSID_COL_NAME
from merlion.utils import TimeSeries, to_pd_datetime
TSID_COL_NAME = "__ts_id"
The provided code snippet includes necessary dependencies for implementing the `reconciliation` function. Write a Python function `def reconciliation(pdf: pd.DataFrame, hier_matrix: np.ndarray, target_col: str)` to solve the following problem:
Pyspark pandas UDF for computing the minimum-trace hierarchical time series reconciliation, as described by `Wickramasuriya et al. 2018 <https://robjhyndman.com/papers/mint.pdf>`__. Should be called on a pyspark dataframe grouped by timestamp. Pyspark implementation of `merlion.utils.hts.minT_reconciliation`. :param pdf: A ``pandas.DataFrame`` containing forecasted values & standard errors from ``m`` time series at a single timestamp. Each time series should be indexed by `TSID_COL_NAME`. The first ``n`` time series (in order of ID) correspond to leaves of the hierarchy, while the remaining ``m - n`` are weighted sums of the first ``n``. This dataframe can be produced by calling `forecast` on the dataframe produced by `merlion.spark.dataset.create_hier_dataset`. :param hier_matrix: A ``m``-by-``n`` matrix describing how the hierarchy is aggregated. The value of the ``k``-th time series is ``np.dot(hier_matrix[k], pdf[:n])``. This matrix can be produced by `merlion.spark.dataset.create_hier_dataset`. :param target_col: The name of the column whose value we wish to forecast. :return: A ``pandas.DataFrame`` which replaces the original forecasts & errors with reconciled forecasts & errors. .. note:: Time series reconciliation is skipped if the given timestamp has missing values for any of the time series. This can happen for training timestamps if the training time series has missing data and `forecast` is called with ``predict_on_train=True``.
Here is the function:
def reconciliation(pdf: pd.DataFrame, hier_matrix: np.ndarray, target_col: str):
"""
Pyspark pandas UDF for computing the minimum-trace hierarchical time series reconciliation, as described by
`Wickramasuriya et al. 2018 <https://robjhyndman.com/papers/mint.pdf>`__.
Should be called on a pyspark dataframe grouped by timestamp. Pyspark implementation of
`merlion.utils.hts.minT_reconciliation`.
:param pdf: A ``pandas.DataFrame`` containing forecasted values & standard errors from ``m`` time series at a single
timestamp. Each time series should be indexed by `TSID_COL_NAME`.
The first ``n`` time series (in order of ID) correspond to leaves of the hierarchy, while the remaining ``m - n``
are weighted sums of the first ``n``.
This dataframe can be produced by calling `forecast` on the dataframe produced by
`merlion.spark.dataset.create_hier_dataset`.
:param hier_matrix: A ``m``-by-``n`` matrix describing how the hierarchy is aggregated. The value of the ``k``-th
time series is ``np.dot(hier_matrix[k], pdf[:n])``. This matrix can be produced by
`merlion.spark.dataset.create_hier_dataset`.
:param target_col: The name of the column whose value we wish to forecast.
:return: A ``pandas.DataFrame`` which replaces the original forecasts & errors with reconciled forecasts & errors.
.. note::
Time series reconciliation is skipped if the given timestamp has missing values for any of the
time series. This can happen for training timestamps if the training time series has missing data and
`forecast` is called with ``predict_on_train=True``.
"""
# Get shape params & sort the data (for this timestamp) by time series ID.
m, n = hier_matrix.shape
assert len(pdf) <= m >= n
if len(pdf) < m:
return pdf
assert (hier_matrix[:n] == np.eye(n)).all()
pdf = pdf.sort_values(by=TSID_COL_NAME)
# Compute the error weight matrix W (m by m)
errname = f"{target_col}_err"
coefs = hier_matrix.sum(axis=1)
errs = pdf[errname].values if errname in pdf.columns else np.full(m, np.nan)
nan_errs = np.isnan(errs)
if nan_errs.all():
W = np.diag(coefs)
else:
if nan_errs.any():
errs[nan_errs] = np.nanmean(errs / coefs) * coefs[nan_errs]
W = np.diag(errs)
# Create other supplementary matrices
J = np.concatenate((np.eye(n), np.zeros((n, m - n))), axis=1)
U = np.concatenate((-hier_matrix[n:], np.eye(m - n)), axis=1) # U.T from the paper
# Compute projection matrix to compute coherent leaf forecasts
inv = np.linalg.pinv(U @ W @ U.T) # (m-n) by (m-n)
P = J - ((J @ W) @ U.T) @ (inv @ U) # n by m
# Compute reconciled forecasts & errors
rec = hier_matrix @ (P @ pdf[target_col].values)
if nan_errs.all():
rec_errs = errs
else:
# P * W.diagonal() is a faster way to compute P @ W, since W is diagonal
rec_errs = hier_matrix @ (P * W.diagonal()) # m by m
# np.sum(rec_errs ** 2, axis=1) is a faster way to compute (rec_errs @ rec_errs.T).diagonal()
rec_errs = np.sqrt(np.sum(rec_errs**2, axis=1))
# Replace original forecasts & errors with reconciled ones
reconciled = pd.DataFrame(np.stack([rec, rec_errs], axis=1), index=pdf.index, columns=[target_col, errname])
df = pd.concat((pdf.drop(columns=[target_col, errname]), reconciled), axis=1)
return df | Pyspark pandas UDF for computing the minimum-trace hierarchical time series reconciliation, as described by `Wickramasuriya et al. 2018 <https://robjhyndman.com/papers/mint.pdf>`__. Should be called on a pyspark dataframe grouped by timestamp. Pyspark implementation of `merlion.utils.hts.minT_reconciliation`. :param pdf: A ``pandas.DataFrame`` containing forecasted values & standard errors from ``m`` time series at a single timestamp. Each time series should be indexed by `TSID_COL_NAME`. The first ``n`` time series (in order of ID) orrespond to leaves of the hierarchy, while the remaining ``m - n`` are weighted sums of the first ``n``. This dataframe can be produced by calling `forecast` on the dataframe produced by `merlion.spark.dataset.create_hier_dataset`. :param hier_matrix: A ``m``-by-``n`` matrix describing how the hierarchy is aggregated. The value of the ``k``-th time series is ``np.dot(hier_matrix[k], pdf[:n])``. This matrix can be produced by `merlion.spark.dataset.create_hier_dataset`. :param target_col: The name of the column whose value we wish to forecast. :return: A ``pandas.DataFrame`` which replaces the original forecasts & errors with reconciled forecasts & errors. .. note:: Time series series reconciliation is skipped if the given timestamp has missing values for any of the time series. This can happen for training timestamps if the training time series has missing data and `forecast` is called with ``predict_on_train=true``. |
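A self-contained, hedged sketch of `reconciliation` on a single timestamp with a two-leaf hierarchy (the numbers are invented; the matrix has the shape `create_hier_dataset` is described as producing, with the aggregate series last):
import numpy as np
import pandas as pd
hier_matrix = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # series 0 and 1 are leaves; series 2 is their sum
pdf = pd.DataFrame({
    "__ts_id": [0, 1, 2],
    "time": pd.to_datetime(["2022-01-05"] * 3),
    "sales": [10.0, 20.0, 33.0],          # incoherent: 10 + 20 != 33
    "sales_err": [1.0, 2.0, 1.5],
})
rec = reconciliation(pdf, hier_matrix=hier_matrix, target_col="sales")
print(rec[["__ts_id", "sales"]])          # reconciled leaves now sum exactly to the reconciled total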
253 | from typing import Dict, List, Tuple
import numpy as np
import pandas as pd
try:
import pyspark.sql
import pyspark.sql.functions as F
from pyspark.sql.types import DateType, StringType, StructType
except ImportError as e:
err = (
"Try installing Merlion with optional dependencies using `pip install salesforce-merlion[spark]` or "
"`pip install salesforce-merlion[all]`"
)
raise ImportError(str(e) + ". " + err)
def add_tsid_column(
spark: pyspark.sql.SparkSession, df: pyspark.sql.DataFrame, index_cols: List[str]
) -> pyspark.sql.DataFrame:
"""
Adds the column `TSID_COL_NAME` to the dataframe, which assigns an integer ID to each time series in the dataset.
:param spark: The current SparkSession.
:param df: A pyspark dataframe containing all the data.
:param index_cols: The columns used to index the various time series in the dataset.
:return: The pyspark dataframe with an additional column `TSID_COL_NAME` added as the last column.
"""
if TSID_COL_NAME in df.schema.fieldNames():
return df
# If no index columns are specified, we are only dealing with a single time series
if index_cols is None or len(index_cols) == 0:
return df.join(spark.createDataFrame(pd.DataFrame([0], columns=[TSID_COL_NAME])))
# Compute time series IDs. Time series with any null indexes come last b/c these are aggregated time series.
ts_index = df.groupBy(index_cols).count().drop("count").toPandas()
null_rows = ts_index.isna().any(axis=1)
ts_index = pd.concat(
(
ts_index[~null_rows].sort_values(by=index_cols, axis=0, ascending=True),
ts_index[null_rows].sort_values(by=index_cols, axis=0, ascending=True),
),
axis=0,
)
ts_index[TSID_COL_NAME] = np.arange(len(ts_index))
# Add the time series ID column to the overall dataframe
ts_index = spark.createDataFrame(ts_index)
for i, col in enumerate(index_cols):
pred = df[col].eqNullSafe(ts_index[col])
condition = pred if i == 0 else condition & pred
df = df.join(ts_index, on=condition)
for col in index_cols:
df = df.drop(ts_index[col])
return df
The provided code snippet includes necessary dependencies for implementing the `read_dataset` function. Write a Python function `def read_dataset( spark: pyspark.sql.SparkSession, path: str, file_format: str = "csv", time_col: str = None, index_cols: List[str] = None, data_cols: List[str] = None, ) -> pyspark.sql.DataFrame` to solve the following problem:
Reads a time series dataset as a pyspark Dataframe. :param spark: The current SparkSession. :param path: The path at which the dataset is stored. :param file_format: The file format the dataset is stored in. :param time_col: The name of the column which specifies timestamp. If ``None`` is provided, it is assumed to be the first column which is not an index column or pre-specified data column. :param index_cols: The columns used to index the various time series in the dataset. If ``None`` is provided, we assume the entire dataset is just a single time series. :param data_cols: The columns we will use for downstream time series tasks. If ``None`` is provided, we use all columns that are not a time or index column. :return: A pyspark dataframe with columns ``[time_col, *index_cols, *data_cols, TSID_COL_NAME]`` (in that order).
Here is the function:
def read_dataset(
spark: pyspark.sql.SparkSession,
path: str,
file_format: str = "csv",
time_col: str = None,
index_cols: List[str] = None,
data_cols: List[str] = None,
) -> pyspark.sql.DataFrame:
"""
Reads a time series dataset as a pyspark Dataframe.
:param spark: The current SparkSession.
:param path: The path at which the dataset is stored.
:param file_format: The file format the dataset is stored in.
:param time_col: The name of the column which specifies timestamp. If ``None`` is provided, it is assumed to be the
first column which is not an index column or pre-specified data column.
:param index_cols: The columns used to index the various time series in the dataset. If ``None`` is provided, we
assume the entire dataset is just a single time series.
:param data_cols: The columns we will use for downstream time series tasks. If ``None`` is provided, we use all
columns that are not a time or index column.
:return: A pyspark dataframe with columns ``[time_col, *index_cols, *data_cols, TSID_COL_NAME]`` (in that order).
"""
# Read the dataset into a pyspark dataframe
df = spark.read.format(file_format).load(path, inferSchema=True, header=True)
# Only keep the index column, data columns, and time column
index_cols = index_cols or []
if time_col is None:
time_col = [c for c in df.schema.fieldNames() if c not in index_cols + (data_cols or [])][0]
# Use all non-index, non-time columns as data columns if data columns are not given
if data_cols is None or len(data_cols) == 0:
data_cols = [c for c in df.schema.fieldNames() if c not in index_cols + [time_col]]
assert all(col in data_cols and col not in index_cols + [time_col] for col in data_cols)
# Get the columns in the right order, convert index columns to string, and get data columns in the right order.
# Index cols are string because we indicate aggregation with a reserved "__aggregated__" string
df = df.select(
F.col(time_col).cast(DateType()).alias(time_col),
*[F.col(c).cast(StringType()).alias(c) for c in index_cols],
*data_cols,
)
# add TSID_COL_NAME to the end before returning
return add_tsid_column(spark=spark, df=df, index_cols=index_cols) | Reads a time series dataset as a pyspark Dataframe. :param spark: The current SparkSession. :param path: The path at which the dataset is stored. :param file_format: The file format the dataset is stored in. :param time_col: The name of the column which specifies timestamp. If ``None`` is provided, it is assumed to be the first column which is not an index column or pre-specified data column. :param index_cols: The columns used to index the various time series in the dataset. If ``None`` is provided, we assume the entire dataset is just a single time series. :param data_cols: The columns we will use for downstream time series tasks. If ``None`` is provided, we use all columns that are not a time or index column. :return: A pyspark dataframe with columns ``[time_col, *index_cols, *data_cols, TSID_COL_NAME]`` (in that order). |
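For orientation, here is a hedged usage sketch of ``read_dataset``; the file name and column names below are assumptions, not part of the original snippet.

from pyspark.sql import SparkSession

# Hypothetical CSV with columns Date, Store, Dept, Weekly_Sales.
spark = SparkSession.builder.master("local[2]").getOrCreate()
df = read_dataset(
    spark=spark,
    path="walmart_mini.csv",
    file_format="csv",
    time_col="Date",
    index_cols=["Store", "Dept"],
    data_cols=["Weekly_Sales"],
)
df.show(5)  # columns: Date, Store, Dept, Weekly_Sales, __ts_id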
254 | from typing import Dict, List, Tuple
import numpy as np
import pandas as pd
try:
import pyspark.sql
import pyspark.sql.functions as F
from pyspark.sql.types import DateType, StringType, StructType
except ImportError as e:
err = (
"Try installing Merlion with optional dependencies using `pip install salesforce-merlion[spark]` or "
"`pip install salesforce-merlion[all]`"
)
raise ImportError(str(e) + ". " + err)
TSID_COL_NAME = "__ts_id"
The provided code snippet includes necessary dependencies for implementing the `write_dataset` function. Write a Python function `def write_dataset(df: pyspark.sql.DataFrame, time_col: str, path: str, file_format: str = "csv")` to solve the following problem:
Writes the dataset at the specified path. :param df: The dataframe to save. The dataframe must have a column `TSID_COL_NAME` indexing the time series in the dataset (this column is automatically added by `read_dataset`). :param time_col: The name of the column which specifies timestamp. :param path: The path to save the dataset at. :param file_format: The file format in which to save the dataset.
Here is the function:
def write_dataset(df: pyspark.sql.DataFrame, time_col: str, path: str, file_format: str = "csv"):
"""
Writes the dataset at the specified path.
:param df: The dataframe to save. The dataframe must have a column `TSID_COL_NAME`
indexing the time series in the dataset (this column is automatically added by `read_dataset`).
:param time_col: The name of the column which specifies timestamp.
:param path: The path to save the dataset at.
:param file_format: The file format in which to save the dataset.
"""
df = df.sort([TSID_COL_NAME, time_col]).drop(TSID_COL_NAME)
df.write.format(file_format).save(path, header=True, mode="overwrite") | Writes the dataset at the specified path. :param df: The dataframe to save. The dataframe must have a column `TSID_COL_NAME` indexing the time series in the dataset (this column is automatically added by `read_dataset`). :param time_col: The name of the column which specifies timestamp. :param path: The path to save the dataset at. :param file_format: The file format in which to save the dataset. |
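``write_dataset`` is the mirror image of ``read_dataset``: it drops the internal ID column and writes the rows sorted by series and time. A short hypothetical round trip (paths and column names are made up):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = read_dataset(spark, path="walmart_mini.csv", time_col="Date", index_cols=["Store", "Dept"])
write_dataset(df, time_col="Date", path="walmart_out", file_format="csv")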
255 | from typing import Dict, List, Tuple
import numpy as np
import pandas as pd
try:
import pyspark.sql
import pyspark.sql.functions as F
from pyspark.sql.types import DateType, StringType, StructType
except ImportError as e:
err = (
"Try installing Merlion with optional dependencies using `pip install salesforce-merlion[spark]` or "
"`pip install `salesforce-merlion[all]`"
)
raise ImportError(str(e) + ". " + err)
TSID_COL_NAME = "__ts_id"
The provided code snippet includes necessary dependencies for implementing the `create_hier_dataset` function. Write a Python function `def create_hier_dataset( spark: pyspark.sql.SparkSession, df: pyspark.sql.DataFrame, time_col: str = None, index_cols: List[str] = None, agg_dict: Dict = None, ) -> Tuple[pyspark.sql.DataFrame, np.ndarray]` to solve the following problem:
Aggregates the time series in the dataset & appends them to the original dataset. :param spark: The current SparkSession. :param df: A pyspark dataframe containing all the data. The dataframe must have a column `TSID_COL_NAME` indexing the time series in the dataset (this column is automatically added by `read_dataset`). :param time_col: The name of the column which specifies timestamp. If ``None`` is provided, it is assumed to be the first column which is not an index column or pre-specified data column. :param index_cols: The columns used to index the various time series in the dataset. If ``None`` is provided, we assume the entire dataset is just a single time series. These columns define the levels of the hierarchy. For example, if each time series represents sales and we have ``index_cols = ["store", "item"]``, we will first aggregate sales for all items sold at a particular store; then we will aggregate sales for all items at all stores. :param agg_dict: A dictionary used to specify how different data columns should be aggregated. If a data column is not in the dict, we aggregate using sum by default. :return: The dataset with additional time series corresponding to each level of the hierarchy, as well as a matrix specifying how the hierarchy is constructed.
Here is the function:
def create_hier_dataset(
spark: pyspark.sql.SparkSession,
df: pyspark.sql.DataFrame,
time_col: str = None,
index_cols: List[str] = None,
agg_dict: Dict = None,
) -> Tuple[pyspark.sql.DataFrame, np.ndarray]:
"""
Aggregates the time series in the dataset & appends them to the original dataset.
:param spark: The current SparkSession.
:param df: A pyspark dataframe containing all the data. The dataframe must have a column `TSID_COL_NAME`
indexing the time series in the dataset (this column is automatically added by `read_dataset`).
:param time_col: The name of the column which specifies timestamp. If ``None`` is provided, it is assumed to be the
first column which is not an index column or pre-specified data column.
:param index_cols: The columns used to index the various time series in the dataset. If ``None`` is provided, we
assume the entire dataset is just a single time series. These columns define the levels of the hierarchy.
For example, if each time series represents sales and we have ``index_cols = ["store", "item"]``, we will
first aggregate sales for all items sold at a particular store; then we will aggregate sales for all items at
all stores.
:param agg_dict: A dictionary used to specify how different data columns should be aggregated. If a data column
is not in the dict, we aggregate using sum by default.
:return: The dataset with additional time series corresponding to each level of the hierarchy, as well as a
matrix specifying how the hierarchy is constructed.
"""
# Determine which columns are index vs. data columns
index_cols = [] if index_cols is None else index_cols
index_cols = [c for c in index_cols if c != TSID_COL_NAME]
extended_index_cols = index_cols + [TSID_COL_NAME]
if time_col is None:
non_index_cols = [c for c in df.schema.fieldNames() if c not in extended_index_cols]
time_col = non_index_cols[0]
data_cols = non_index_cols[1:]
else:
data_cols = [c for c in df.schema.fieldNames() if c not in extended_index_cols + [time_col]]
# Create a pandas index for all the time series
ts_index = df.groupBy(extended_index_cols).count().drop("count").toPandas()
ts_index = ts_index.set_index(index_cols).sort_index()
index_schema = StructType([df.schema[c] for c in extended_index_cols])
n = len(ts_index)
# Add all higher levels of the hierarchy
full_df = df
hier_vecs = []
# Compose the aggregation portions of the SQL select statements below
df.createOrReplaceTempView("df")
agg_dict = {} if agg_dict is None else agg_dict
data_col_sql = [f"{agg_dict.get(c, 'sum').upper()}(`{c}`) AS `{c}`" for c in data_cols]
for k in range(len(index_cols)):
# Aggregate values of data columns over the last k+1 index column values.
gb_cols = index_cols[: -(k + 1)]
gb_col_sql = [f"`{c}`" for c in [time_col] + gb_cols]
agg = spark.sql(f"SELECT {','.join(gb_col_sql + data_col_sql)} FROM df GROUP BY {','.join(gb_col_sql)};")
# Add back dummy NA values for the index columns we aggregated over, add a time series ID column,
# concatenate the aggregated time series to the full dataframe, and compute the hierarchy vector.
# For the top level of the hierarchy, this is easy as we just sum everything
if len(gb_cols) == 0:
dummy = [["__aggregated__"] * len(index_cols) + [n + len(hier_vecs)]]
full_df = full_df.unionByName(agg.join(spark.createDataFrame(dummy, schema=index_schema)))
hier_vecs.append(np.ones(n))
continue
# For lower levels of the hierarchy, we determine the membership of each grouping to create
# the appropriate dummy entries and hierarchy vectors.
dummy = []
for i, (group, group_idxs) in enumerate(ts_index.groupby(gb_cols).groups.items()):
group = [group] if len(gb_cols) == 1 else list(group)
locs = [ts_index.index.get_loc(j) for j in group_idxs]
dummy.append(group + ["__aggregated__"] * (k + 1) + [n + len(hier_vecs)])
x = np.zeros(n)
x[locs] = 1
hier_vecs.append(x)
dummy = spark.createDataFrame(dummy, schema=index_schema)
full_df = full_df.unionByName(agg.join(dummy, on=gb_cols))
# Create the full hierarchy matrix, and return it along with the updated dataframe
hier_matrix = np.concatenate([np.eye(n), np.stack(hier_vecs)])
return full_df, hier_matrix | Aggregates the time series in the dataset & appends them to the original dataset. :param spark: The current SparkSession. :param df: A pyspark dataframe containing all the data. The dataframe must have a column `TSID_COL_NAME` indexing the time series in the dataset (this column is automatically added by `read_dataset`). :param time_col: The name of the column which specifies timestamp. If ``None`` is provided, it is assumed to be the first column which is not an index column or pre-specified data column. :param index_cols: The columns used to index the various time series in the dataset. If ``None`` is provided, we assume the entire dataset is just a single time series. These columns define the levels of the hierarchy. For example, if each time series represents sales and we have ``index_cols = ["store", "item"]``, we will first aggregate sales for all items sold at a particular store; then we will aggregate sales for all items at all stores. :param agg_dict: A dictionary used to specify how different data columns should be aggregated. If a data column is not in the dict, we aggregate using sum by default. :return: The dataset with additional time series corresponding to each level of the hierarchy, as well as a matrix specifying how the hierarchy is constructed. |
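To make the returned hierarchy matrix concrete, here is a small illustration of what ``create_hier_dataset`` produces for ``index_cols = ["store", "item"]`` with two stores and two items; the labels are hypothetical.

import numpy as np

# n = 4 leaf series, one per (store, item) pair in sorted order. The function then appends
# one aggregate per store (k = 0) and finally the grand total (k = 1), so hier_matrix is 7 x 4:
hier_matrix = np.array(
    [
        [1, 0, 0, 0],  # (store_1, item_1)
        [0, 1, 0, 0],  # (store_1, item_2)
        [0, 0, 1, 0],  # (store_2, item_1)
        [0, 0, 0, 1],  # (store_2, item_2)
        [1, 1, 0, 0],  # store_1, all items aggregated
        [0, 0, 1, 1],  # store_2, all items aggregated
        [1, 1, 1, 1],  # grand total
    ]
)
assert (hier_matrix[:4] == np.eye(4)).all()
# At each timestamp, the value of appended series k equals hier_matrix[k] @ leaf_values,
# which is exactly the coherence assumption checked by the reconciliation UDF shown earlier.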
256 | import logging
from typing import Dict
from copy import copy
from matplotlib.colors import to_rgb
from matplotlib.dates import AutoDateLocator, AutoDateFormatter
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import plotly
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from merlion.utils import TimeSeries, UnivariateTimeSeries
The provided code snippet includes necessary dependencies for implementing the `plot_anoms` function. Write a Python function `def plot_anoms(ax: plt.Axes, anomaly_labels: TimeSeries)` to solve the following problem:
Plots anomalies as pink windows on the matplotlib ``Axes`` object ``ax``.
Here is the function:
def plot_anoms(ax: plt.Axes, anomaly_labels: TimeSeries):
"""
Plots anomalies as pink windows on the matplotlib ``Axes`` object ``ax``.
"""
if anomaly_labels is None:
return ax
anomaly_labels = anomaly_labels.to_pd()
t, y = anomaly_labels.index, anomaly_labels.values
splits = np.where(y[1:] != y[:-1])[0] + 1
splits = np.concatenate(([0], splits, [len(y) - 1]))
for k in range(len(splits) - 1):
if y[splits[k]]: # If splits[k] is anomalous
ax.axvspan(t[splits[k]], t[splits[k + 1]], color="#e07070", alpha=0.5)
return ax | Plots anomalies as pink windows on the matplotlib ``Axes`` object ``ax``. |
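A minimal usage sketch of ``plot_anoms`` on made-up data; it assumes only the imports already shown in the snippet.

import matplotlib.pyplot as plt
import pandas as pd
from merlion.utils import TimeSeries

index = pd.date_range("2023-01-01", periods=6, freq="D")
values = pd.Series([1.0, 1.2, 5.0, 4.8, 1.1, 0.9], index=index)
labels = pd.Series([0, 0, 1, 1, 0, 0], index=index)  # 1 marks the anomalous window

fig, ax = plt.subplots()
ax.plot(values.index, values.values)
plot_anoms(ax, TimeSeries.from_pd(labels))
plt.show()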
257 | import logging
from typing import Dict
from copy import copy
from matplotlib.colors import to_rgb
from matplotlib.dates import AutoDateLocator, AutoDateFormatter
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import plotly
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from merlion.utils import TimeSeries, UnivariateTimeSeries
The provided code snippet includes necessary dependencies for implementing the `plot_anoms_plotly` function. Write a Python function `def plot_anoms_plotly(fig, anomaly_labels: TimeSeries)` to solve the following problem:
Plots anomalies as pink windows on the plotly ``Figure`` object ``fig``.
Here is the function:
def plot_anoms_plotly(fig, anomaly_labels: TimeSeries):
"""
Plots anomalies as pink windows on the plotly ``Figure`` object ``fig``.
"""
if anomaly_labels is None:
return fig
anomaly_labels = anomaly_labels.to_pd()
t, y = anomaly_labels.index, anomaly_labels.values
splits = np.where(y[1:] != y[:-1])[0] + 1
splits = np.concatenate(([0], splits, [len(y) - 1]))
for k in range(len(splits) - 1):
if y[splits[k]]: # If splits[k] is anomalous
fig.add_vrect(t[splits[k]], t[splits[k + 1]], line_width=0, fillcolor="#e07070", opacity=0.4)
return fig | Plots anomalies as pink windows on the plotly ``Figure`` object ``fig``. |
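The plotly variant is used the same way, only on a ``plotly`` figure; again the data is made up.

import pandas as pd
import plotly.graph_objects as go
from merlion.utils import TimeSeries

index = pd.date_range("2023-01-01", periods=6, freq="D")
fig = go.Figure(go.Scatter(x=index, y=[1.0, 1.2, 5.0, 4.8, 1.1, 0.9]))
labels = TimeSeries.from_pd(pd.Series([0, 0, 1, 1, 0, 0], index=index))
plot_anoms_plotly(fig, labels).show()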
258 | import json
import logging
import os
import traceback
import numpy as np
import dash
from dash import Input, Output, State, callback, dcc
from merlion.dashboard.pages.data import create_stats_table, create_metric_stats_table
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.data import DataAnalyzer
file_manager = FileManager()
def upload_file(filenames, contents):
name = None
if filenames is not None and contents is not None:
for name, data in zip(filenames, contents):
file_manager.save_file(name, data)
options = []
files = file_manager.uploaded_files()
for filename in files:
options.append({"label": filename, "value": filename})
return options, name | null |
259 | import json
import logging
import os
import traceback
import numpy as np
import dash
from dash import Input, Output, State, callback, dcc
from merlion.dashboard.pages.data import create_stats_table, create_metric_stats_table
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.data import DataAnalyzer
logger = logging.getLogger(__name__)
file_manager = FileManager()
class DefaultEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, np.bool_):
return str(obj)
if isinstance(obj, np.integer):
return int(obj)
if isinstance(obj, np.floating):
return float(obj)
if isinstance(obj, np.ndarray):
return obj.tolist()
return super().default(obj)
# Dash callback wiring whose decorator was stripped in this snippet; it connects the
# upload component to the file dropdown, roughly:
# @callback(
#     Output("select-file", "options"),
#     Output("select-file", "value"),
#     [Input("upload-data", "filename"), Input("upload-data", "contents")],
# )
def create_stats_table(data_stats=None):
if data_stats is None or len(data_stats) == 0:
data = [{"Stats": "", "Value": ""}]
else:
data = [{"Stats": key, "Value": value} for key, value in data_stats["@global"].items()]
table = dash_table.DataTable(
id="data-stats",
data=data,
columns=[{"id": "Stats", "name": "Stats"}, {"id": "Value", "name": "Value"}],
editable=False,
style_header_conditional=[{"textAlign": "center", "font-family": "Salesforce Sans"}],
style_cell_conditional=[{"textAlign": "center", "font-family": "Salesforce Sans"}],
style_header=dict(backgroundColor=TABLE_HEADER_COLOR, color="white"),
style_data=dict(backgroundColor=TABLE_DATA_COLOR),
)
return table
class DataAnalyzer(DataMixin):
def __init__(self):
self.logger = logging.getLogger(__name__)
self.logger.setLevel(logging.DEBUG)
self.logger.addHandler(dash_logger)
def get_stats(df):
stats = {
"@global": OrderedDict(
{
"NO. of Variables": len(df.columns),
"Time Series Length": len(df),
"Has NaNs": bool(df.isnull().values.any()),
}
),
"@columns": list(df.columns),
}
for col in df.columns:
stats[col] = df[col].describe().to_dict(into=OrderedDict)
return stats
def get_data_table(df):
return data_table(df)
def get_data_figure(df):
if df is None:
return create_empty_figure()
else:
return plot_timeseries(df)
def click_run(btn_click, modal_close, filename, data):
ctx = dash.callback_context
stats = json.loads(data) if data is not None else {}
stats_table = create_stats_table()
data_table = DataAnalyzer.get_data_table(df=None)
data_figure = DataAnalyzer.get_data_figure(df=None)
modal_is_open = False
modal_content = ""
prop_id = ctx.triggered_id
if prop_id == "data-btn" and btn_click > 0:
try:
assert filename, "Please select a file to load."
file_path = os.path.join(file_manager.data_directory, filename)
df = DataAnalyzer().load_data(file_path)
stats = DataAnalyzer.get_stats(df)
stats_table = create_stats_table(stats)
data_table = DataAnalyzer.get_data_table(df)
data_figure = DataAnalyzer.get_data_figure(df)
except Exception:
error = traceback.format_exc()
modal_is_open = True
modal_content = error
logger.error(error)
return stats_table, json.dumps(stats, cls=DefaultEncoder), data_table, data_figure, modal_is_open, modal_content | null |
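The reason ``click_run`` serializes the stats with ``cls=DefaultEncoder`` is that plain ``json.dumps`` rejects numpy scalars and arrays. A minimal sketch, reusing the ``DefaultEncoder`` defined in the snippet above with made-up stats:

import json
import numpy as np

stats = {"Has NaNs": np.bool_(False), "Time Series Length": np.int64(120), "mean": np.float64(3.5), "values": np.arange(3)}
print(json.dumps(stats, cls=DefaultEncoder))
# {"Has NaNs": "False", "Time Series Length": 120, "mean": 3.5, "values": [0, 1, 2]}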
260 | import json
import logging
import os
import traceback
import numpy as np
import dash
from dash import Input, Output, State, callback, dcc
from merlion.dashboard.pages.data import create_stats_table, create_metric_stats_table
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.data import DataAnalyzer
def update_metric_dropdown(n_clicks, data):
options = []
ctx = dash.callback_context
prop_id = ctx.triggered_id
if prop_id == "select-column-parent":
stats = json.loads(data)
options += [{"label": s, "value": s} for s in stats.keys() if s.find("@") == -1]
return options | null |
261 | import json
import logging
import os
import traceback
import numpy as np
import dash
from dash import Input, Output, State, callback, dcc
from merlion.dashboard.pages.data import create_stats_table, create_metric_stats_table
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.data import DataAnalyzer
def create_metric_stats_table(metric_stats=None, column=None):
def update_metric_table(column, data):
ctx = dash.callback_context
metric_stats_table = create_metric_stats_table()
prop_id = ctx.triggered_id
if prop_id == "select-column":
stats = json.loads(data)
metric_stats_table = create_metric_stats_table(stats, column)
return metric_stats_table | null |
262 | import json
import logging
import os
import traceback
import numpy as np
import dash
from dash import Input, Output, State, callback, dcc
from merlion.dashboard.pages.data import create_stats_table, create_metric_stats_table
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.data import DataAnalyzer
file_manager = FileManager()
def select_download_parent(n_clicks):
options = []
ctx = dash.callback_context
prop_id = ctx.triggered_id
if prop_id == "data-download-parent":
models = file_manager.get_model_list()
options += [{"label": s, "value": s} for s in models]
return options | null |
263 | import json
import logging
import os
import traceback
import numpy as np
import dash
from dash import Input, Output, State, callback, dcc
from merlion.dashboard.pages.data import create_stats_table, create_metric_stats_table
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.data import DataAnalyzer
logger = logging.getLogger(__name__)
file_manager = FileManager()
def click_run(btn_click, modal_close, model):
ctx = dash.callback_context
modal_is_open = False
modal_content = ""
data = None
prop_id = ctx.triggered_id
if prop_id == "data-download-btn" and btn_click > 0:
try:
assert model, "Please select the model to download."
path = file_manager.get_model_download_path(model)
data = dcc.send_file(path)
except Exception:
error = traceback.format_exc()
modal_is_open = True
modal_content = error
logger.error(error)
return data, modal_is_open, modal_content | null |
264 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, dcc, callback
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.forecast import ForecastModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
file_manager = FileManager()
def update_select_file_dropdown(n_clicks, filename, target, features, exog):
options = []
ctx = dash.callback_context
if ctx.triggered:
prop_ids = {p["prop_id"].split(".")[0]: p["value"] for p in ctx.triggered}
if "forecasting-select-file-parent" in prop_ids:
files = file_manager.uploaded_files()
for f in files:
options.append({"label": f, "value": f})
if "forecasting-select-file" in prop_ids:
target, features, exog = None, None, None
return options, target, features, exog | null |
265 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, dcc, callback
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.forecast import ForecastModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
file_manager = FileManager()
def update_select_test_file_dropdown(n_clicks):
options = []
ctx = dash.callback_context
prop_id = ctx.triggered_id
if prop_id == "forecasting-select-test-file-parent":
files = file_manager.uploaded_files()
for filename in files:
options.append({"label": filename, "value": filename})
return options | null |
266 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, dcc, callback
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.forecast import ForecastModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
file_manager = FileManager()
class ForecastModel(ModelMixin, DataMixin):
algorithms = [
"DefaultForecaster",
"Arima",
"LGBMForecaster",
"ETS",
"AutoETS",
"Prophet",
"AutoProphet",
"Sarima",
"VectorAR",
"RandomForestForecaster",
"ExtraTreesForecaster",
]
def __init__(self):
self.logger = logging.getLogger(__name__)
self.logger.setLevel(logging.DEBUG)
self.logger.addHandler(dash_logger)
def get_available_algorithms():
return ForecastModel.algorithms
def _compute_metrics(evaluator, ts, predictions):
return {
m: round(evaluator.evaluate(ground_truth=ts, predict=predictions, metric=ForecastMetric[m]), 5)
for m in ["MAE", "MARRE", "RMSE", "sMAPE", "RMSPE"]
}
def train(self, algorithm, train_df, test_df, target_column, feature_columns, exog_columns, params, set_progress):
if target_column not in train_df:
target_column = int(target_column)
assert target_column in train_df, f"The target variable {target_column} is not in the time series."
try:
feature_columns = [int(c) if c not in train_df else c for c in feature_columns]
except ValueError:
feature_columns = []
try:
exog_columns = [int(c) if c not in train_df else c for c in exog_columns]
except ValueError:
exog_columns = []
for exog_column in exog_columns:
assert exog_column in train_df, f"Exogenous variable {exog_column} is not in the time series."
# Re-arrange dataframe so that the target column is first, and exogenous columns are last
columns = [target_column] + feature_columns + exog_columns
train_df = train_df.loc[:, columns]
test_df = test_df.loc[:, columns]
# Get the target_seq_index & initialize the model
params["target_seq_index"] = columns.index(target_column)
model_class = ModelFactory.get_model_class(algorithm)
model = model_class(model_class.config_class(**params))
# Handle exogenous regressors if they are supported by the model
if model.supports_exog and len(exog_columns) > 0:
exog_ts = TimeSeries.from_pd(pd.concat((train_df.loc[:, exog_columns], test_df.loc[:, exog_columns])))
train_df = train_df.loc[:, [target_column] + feature_columns]
test_df = test_df.loc[:, [target_column] + feature_columns]
else:
exog_ts = None
self.logger.info(f"Training the forecasting model: {algorithm}...")
set_progress(("2", "10"))
train_ts = TimeSeries.from_pd(train_df)
predictions = model.train(train_ts, exog_data=exog_ts)
if isinstance(predictions, tuple):
predictions = predictions[0]
self.logger.info("Computing training performance metrics...")
set_progress(("6", "10"))
evaluator = ForecastEvaluator(model, config=ForecastEvaluator.config_class())
train_metrics = ForecastModel._compute_metrics(evaluator, train_ts, predictions)
set_progress(("7", "10"))
test_ts = TimeSeries.from_pd(test_df)
if "max_forecast_steps" in params and params["max_forecast_steps"] is not None:
n = min(len(test_ts) - 1, int(params["max_forecast_steps"]))
test_ts, _ = test_ts.bisect(t=test_ts.time_stamps[n])
self.logger.info("Computing test performance metrics...")
test_pred, test_err = model.forecast(time_stamps=test_ts.time_stamps, exog_data=exog_ts)
test_metrics = ForecastModel._compute_metrics(evaluator, test_ts, test_pred)
set_progress(("8", "10"))
self.logger.info("Plotting forecasting results...")
figure = model.plot_forecast_plotly(
time_series=test_ts, time_series_prev=train_ts, exog_data=exog_ts, plot_forecast_uncertainty=True
)
figure.update_layout(width=None, height=500)
self.logger.info("Finished.")
set_progress(("10", "10"))
return model, train_metrics, test_metrics, figure
def select_target(n_clicks, filename, feat_names, exog_names):
options = []
ctx = dash.callback_context
prop_id = ctx.triggered_id
if prop_id == "forecasting-select-target-parent":
if filename is not None:
file_path = os.path.join(file_manager.data_directory, filename)
df = ForecastModel().load_data(file_path, nrows=2)
forbidden = (feat_names or []) + (exog_names or [])
options += [{"label": s, "value": s} for s in df.columns if s not in forbidden]
return options | null |
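One detail of ``ForecastModel.train`` worth spelling out is the column reordering: the target moves to the front, exogenous variables to the back, and ``target_seq_index`` records where the target ends up. A tiny pandas-only sketch with hypothetical column names:

import pandas as pd

train_df = pd.DataFrame({"temperature": [20.1], "sales": [105.0], "is_holiday": [0]})
target, features, exog = "sales", ["temperature"], ["is_holiday"]

columns = [target] + features + exog
train_df = train_df.loc[:, columns]       # order: sales, temperature, is_holiday
target_seq_index = columns.index(target)  # 0, because the target is always placed first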
267 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, dcc, callback
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.forecast import ForecastModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
file_manager = FileManager()
class ForecastModel(ModelMixin, DataMixin):
algorithms = [
"DefaultForecaster",
"Arima",
"LGBMForecaster",
"ETS",
"AutoETS",
"Prophet",
"AutoProphet",
"Sarima",
"VectorAR",
"RandomForestForecaster",
"ExtraTreesForecaster",
]
def __init__(self):
self.logger = logging.getLogger(__name__)
self.logger.setLevel(logging.DEBUG)
self.logger.addHandler(dash_logger)
def get_available_algorithms():
return ForecastModel.algorithms
def _compute_metrics(evaluator, ts, predictions):
return {
m: round(evaluator.evaluate(ground_truth=ts, predict=predictions, metric=ForecastMetric[m]), 5)
for m in ["MAE", "MARRE", "RMSE", "sMAPE", "RMSPE"]
}
def train(self, algorithm, train_df, test_df, target_column, feature_columns, exog_columns, params, set_progress):
if target_column not in train_df:
target_column = int(target_column)
assert target_column in train_df, f"The target variable {target_column} is not in the time series."
try:
feature_columns = [int(c) if c not in train_df else c for c in feature_columns]
except ValueError:
feature_columns = []
try:
exog_columns = [int(c) if c not in train_df else c for c in exog_columns]
except ValueError:
exog_columns = []
for exog_column in exog_columns:
assert exog_column in train_df, f"Exogenous variable {exog_column} is not in the time series."
# Re-arrange dataframe so that the target column is first, and exogenous columns are last
columns = [target_column] + feature_columns + exog_columns
train_df = train_df.loc[:, columns]
test_df = test_df.loc[:, columns]
# Get the target_seq_index & initialize the model
params["target_seq_index"] = columns.index(target_column)
model_class = ModelFactory.get_model_class(algorithm)
model = model_class(model_class.config_class(**params))
# Handle exogenous regressors if they are supported by the model
if model.supports_exog and len(exog_columns) > 0:
exog_ts = TimeSeries.from_pd(pd.concat((train_df.loc[:, exog_columns], test_df.loc[:, exog_columns])))
train_df = train_df.loc[:, [target_column] + feature_columns]
test_df = test_df.loc[:, [target_column] + feature_columns]
else:
exog_ts = None
self.logger.info(f"Training the forecasting model: {algorithm}...")
set_progress(("2", "10"))
train_ts = TimeSeries.from_pd(train_df)
predictions = model.train(train_ts, exog_data=exog_ts)
if isinstance(predictions, tuple):
predictions = predictions[0]
self.logger.info("Computing training performance metrics...")
set_progress(("6", "10"))
evaluator = ForecastEvaluator(model, config=ForecastEvaluator.config_class())
train_metrics = ForecastModel._compute_metrics(evaluator, train_ts, predictions)
set_progress(("7", "10"))
test_ts = TimeSeries.from_pd(test_df)
if "max_forecast_steps" in params and params["max_forecast_steps"] is not None:
n = min(len(test_ts) - 1, int(params["max_forecast_steps"]))
test_ts, _ = test_ts.bisect(t=test_ts.time_stamps[n])
self.logger.info("Computing test performance metrics...")
test_pred, test_err = model.forecast(time_stamps=test_ts.time_stamps, exog_data=exog_ts)
test_metrics = ForecastModel._compute_metrics(evaluator, test_ts, test_pred)
set_progress(("8", "10"))
self.logger.info("Plotting forecasting results...")
figure = model.plot_forecast_plotly(
time_series=test_ts, time_series_prev=train_ts, exog_data=exog_ts, plot_forecast_uncertainty=True
)
figure.update_layout(width=None, height=500)
self.logger.info("Finished.")
set_progress(("10", "10"))
return model, train_metrics, test_metrics, figure
def select_features(n_clicks, filename, target_name, exog_names):
options = []
ctx = dash.callback_context
prop_id = ctx.triggered_id
if prop_id == "forecasting-select-features-parent":
if filename is not None and target_name is not None:
file_path = os.path.join(file_manager.data_directory, filename)
df = ForecastModel().load_data(file_path, nrows=2)
options += [{"label": s, "value": s} for s in df.columns if s not in [target_name] + (exog_names or [])]
return options | null |
268 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, dcc, callback
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.forecast import ForecastModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
file_manager = FileManager()
class ForecastModel(ModelMixin, DataMixin):
def __init__(self):
def get_available_algorithms():
def _compute_metrics(evaluator, ts, predictions):
def train(self, algorithm, train_df, test_df, target_column, feature_columns, exog_columns, params, set_progress):
def select_exog(n_clicks, filename, target_name, feat_names):
options = []
ctx = dash.callback_context
prop_id = ctx.triggered_id
if prop_id == "forecasting-select-exog-parent":
if filename is not None and target_name is not None:
file_path = os.path.join(file_manager.data_directory, filename)
df = ForecastModel().load_data(file_path, nrows=2)
options += [{"label": s, "value": s} for s in df.columns if s not in [target_name] + (feat_names or [])]
return options | null |
269 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, dcc, callback
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.forecast import ForecastModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
class ForecastModel(ModelMixin, DataMixin):
algorithms = [
"DefaultForecaster",
"Arima",
"LGBMForecaster",
"ETS",
"AutoETS",
"Prophet",
"AutoProphet",
"Sarima",
"VectorAR",
"RandomForestForecaster",
"ExtraTreesForecaster",
]
def __init__(self):
self.logger = logging.getLogger(__name__)
self.logger.setLevel(logging.DEBUG)
self.logger.addHandler(dash_logger)
def get_available_algorithms():
return ForecastModel.algorithms
def _compute_metrics(evaluator, ts, predictions):
return {
m: round(evaluator.evaluate(ground_truth=ts, predict=predictions, metric=ForecastMetric[m]), 5)
for m in ["MAE", "MARRE", "RMSE", "sMAPE", "RMSPE"]
}
def train(self, algorithm, train_df, test_df, target_column, feature_columns, exog_columns, params, set_progress):
if target_column not in train_df:
target_column = int(target_column)
assert target_column in train_df, f"The target variable {target_column} is not in the time series."
try:
feature_columns = [int(c) if c not in train_df else c for c in feature_columns]
except ValueError:
feature_columns = []
try:
exog_columns = [int(c) if c not in train_df else c for c in exog_columns]
except ValueError:
exog_columns = []
for exog_column in exog_columns:
assert exog_column in train_df, f"Exogenous variable {exog_column} is not in the time series."
# Re-arrange dataframe so that the target column is first, and exogenous columns are last
columns = [target_column] + feature_columns + exog_columns
train_df = train_df.loc[:, columns]
test_df = test_df.loc[:, columns]
# Get the target_seq_index & initialize the model
params["target_seq_index"] = columns.index(target_column)
model_class = ModelFactory.get_model_class(algorithm)
model = model_class(model_class.config_class(**params))
# Handle exogenous regressors if they are supported by the model
if model.supports_exog and len(exog_columns) > 0:
exog_ts = TimeSeries.from_pd(pd.concat((train_df.loc[:, exog_columns], test_df.loc[:, exog_columns])))
train_df = train_df.loc[:, [target_column] + feature_columns]
test_df = test_df.loc[:, [target_column] + feature_columns]
else:
exog_ts = None
self.logger.info(f"Training the forecasting model: {algorithm}...")
set_progress(("2", "10"))
train_ts = TimeSeries.from_pd(train_df)
predictions = model.train(train_ts, exog_data=exog_ts)
if isinstance(predictions, tuple):
predictions = predictions[0]
self.logger.info("Computing training performance metrics...")
set_progress(("6", "10"))
evaluator = ForecastEvaluator(model, config=ForecastEvaluator.config_class())
train_metrics = ForecastModel._compute_metrics(evaluator, train_ts, predictions)
set_progress(("7", "10"))
test_ts = TimeSeries.from_pd(test_df)
if "max_forecast_steps" in params and params["max_forecast_steps"] is not None:
n = min(len(test_ts) - 1, int(params["max_forecast_steps"]))
test_ts, _ = test_ts.bisect(t=test_ts.time_stamps[n])
self.logger.info("Computing test performance metrics...")
test_pred, test_err = model.forecast(time_stamps=test_ts.time_stamps, exog_data=exog_ts)
test_metrics = ForecastModel._compute_metrics(evaluator, test_ts, test_pred)
set_progress(("8", "10"))
self.logger.info("Plotting forecasting results...")
figure = model.plot_forecast_plotly(
time_series=test_ts, time_series_prev=train_ts, exog_data=exog_ts, plot_forecast_uncertainty=True
)
figure.update_layout(width=None, height=500)
self.logger.info("Finished.")
set_progress(("10", "10"))
return model, train_metrics, test_metrics, figure
def select_algorithm_parent(n_clicks, selected_target):
options = []
ctx = dash.callback_context
prop_id = ctx.triggered_id
if prop_id == "forecasting-select-algorithm-parent":
algorithms = ForecastModel.get_available_algorithms()
options += [{"label": s, "value": s} for s in algorithms]
return options | null |
270 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, dcc, callback
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.forecast import ForecastModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
class ForecastModel(ModelMixin, DataMixin):
algorithms = [
"DefaultForecaster",
"Arima",
"LGBMForecaster",
"ETS",
"AutoETS",
"Prophet",
"AutoProphet",
"Sarima",
"VectorAR",
"RandomForestForecaster",
"ExtraTreesForecaster",
]
def __init__(self):
self.logger = logging.getLogger(__name__)
self.logger.setLevel(logging.DEBUG)
self.logger.addHandler(dash_logger)
def get_available_algorithms():
return ForecastModel.algorithms
def _compute_metrics(evaluator, ts, predictions):
return {
m: round(evaluator.evaluate(ground_truth=ts, predict=predictions, metric=ForecastMetric[m]), 5)
for m in ["MAE", "MARRE", "RMSE", "sMAPE", "RMSPE"]
}
def train(self, algorithm, train_df, test_df, target_column, feature_columns, exog_columns, params, set_progress):
if target_column not in train_df:
target_column = int(target_column)
assert target_column in train_df, f"The target variable {target_column} is not in the time series."
try:
feature_columns = [int(c) if c not in train_df else c for c in feature_columns]
except ValueError:
feature_columns = []
try:
exog_columns = [int(c) if c not in train_df else c for c in exog_columns]
except ValueError:
exog_columns = []
for exog_column in exog_columns:
assert exog_column in train_df, f"Exogenous variable {exog_column} is not in the time series."
# Re-arrange dataframe so that the target column is first, and exogenous columns are last
columns = [target_column] + feature_columns + exog_columns
train_df = train_df.loc[:, columns]
test_df = test_df.loc[:, columns]
# Get the target_seq_index & initialize the model
params["target_seq_index"] = columns.index(target_column)
model_class = ModelFactory.get_model_class(algorithm)
model = model_class(model_class.config_class(**params))
# Handle exogenous regressors if they are supported by the model
if model.supports_exog and len(exog_columns) > 0:
exog_ts = TimeSeries.from_pd(pd.concat((train_df.loc[:, exog_columns], test_df.loc[:, exog_columns])))
train_df = train_df.loc[:, [target_column] + feature_columns]
test_df = test_df.loc[:, [target_column] + feature_columns]
else:
exog_ts = None
self.logger.info(f"Training the forecasting model: {algorithm}...")
set_progress(("2", "10"))
train_ts = TimeSeries.from_pd(train_df)
predictions = model.train(train_ts, exog_data=exog_ts)
if isinstance(predictions, tuple):
predictions = predictions[0]
self.logger.info("Computing training performance metrics...")
set_progress(("6", "10"))
evaluator = ForecastEvaluator(model, config=ForecastEvaluator.config_class())
train_metrics = ForecastModel._compute_metrics(evaluator, train_ts, predictions)
set_progress(("7", "10"))
test_ts = TimeSeries.from_pd(test_df)
if "max_forecast_steps" in params and params["max_forecast_steps"] is not None:
n = min(len(test_ts) - 1, int(params["max_forecast_steps"]))
test_ts, _ = test_ts.bisect(t=test_ts.time_stamps[n])
self.logger.info("Computing test performance metrics...")
test_pred, test_err = model.forecast(time_stamps=test_ts.time_stamps, exog_data=exog_ts)
test_metrics = ForecastModel._compute_metrics(evaluator, test_ts, test_pred)
set_progress(("8", "10"))
self.logger.info("Plotting forecasting results...")
figure = model.plot_forecast_plotly(
time_series=test_ts, time_series_prev=train_ts, exog_data=exog_ts, plot_forecast_uncertainty=True
)
figure.update_layout(width=None, height=500)
self.logger.info("Finished.")
set_progress(("10", "10"))
return model, train_metrics, test_metrics, figure
def create_param_table(params=None, height=100):
if params is None or len(params) == 0:
data = [{"Parameter": "", "Value": ""}]
else:
data = [{"Parameter": key, "Value": str(value["default"])} for key, value in params.items()]
table = dash_table.DataTable(
data=data,
columns=[{"id": "Parameter", "name": "Parameter"}, {"id": "Value", "name": "Value"}],
editable=True,
style_header_conditional=[{"textAlign": "center", "font-family": "Salesforce Sans"}],
style_cell_conditional=[{"textAlign": "center", "font-family": "Salesforce Sans"}],
style_table={"overflowX": "scroll", "overflowY": "scroll", "height": height},
style_header=dict(backgroundColor=TABLE_HEADER_COLOR, color="white"),
style_data=dict(backgroundColor=TABLE_DATA_COLOR),
)
return table
def select_algorithm(algorithm):
param_table = create_param_table()
ctx = dash.callback_context
prop_id = ctx.triggered_id
if prop_id == "forecasting-select-algorithm":
param_info = ForecastModel.get_parameter_info(algorithm)
param_table = create_param_table(param_info)
return param_table | null |
271 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, dcc, callback
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.forecast import ForecastModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
logger = logging.getLogger(__name__)
file_manager = FileManager()
class ForecastModel(ModelMixin, DataMixin):
def __init__(self):
def get_available_algorithms():
def _compute_metrics(evaluator, ts, predictions):
def train(self, algorithm, train_df, test_df, target_column, feature_columns, exog_columns, params, set_progress):
def create_metric_table(metrics=None):
def create_empty_figure():
def click_train_test(
set_progress,
n_clicks,
modal_close,
filename,
target_col,
feature_cols,
exog_cols,
algorithm,
table,
train_percentage,
test_filename,
file_mode,
):
ctx = dash.callback_context
modal_is_open = False
modal_content = ""
train_metric_table = create_metric_table()
test_metric_table = create_metric_table()
figure = create_empty_figure()
set_progress(("0", "10"))
try:
if ctx.triggered and n_clicks > 0:
prop_id = ctx.triggered_id
if prop_id == "forecasting-train-btn":
assert filename, "The training data file is empty!"
assert target_col, "Please select a target variable/metric for forecasting."
assert algorithm, "Please select a forecasting algorithm."
feature_cols = feature_cols or []
exog_cols = exog_cols or []
df = ForecastModel().load_data(os.path.join(file_manager.data_directory, filename))
assert len(df) > 20, f"The input time series length ({len(df)}) is too small."
if file_mode == "single":
n = int(int(train_percentage) * len(df) / 100)
train_df, test_df = df.iloc[:n], df.iloc[n:]
else:
assert test_filename, "The test file is empty!"
test_df = ForecastModel().load_data(os.path.join(file_manager.data_directory, test_filename))
train_df = df
params = ForecastModel.parse_parameters(
param_info=ForecastModel.get_parameter_info(algorithm),
params={p["Parameter"]: p["Value"] for p in table["props"]["data"] if p["Parameter"]},
)
model, train_metrics, test_metrics, figure = ForecastModel().train(
algorithm, train_df, test_df, target_col, feature_cols, exog_cols, params, set_progress
)
ForecastModel.save_model(file_manager.model_directory, model, algorithm)
train_metric_table = create_metric_table(train_metrics)
test_metric_table = create_metric_table(test_metrics)
figure = dcc.Graph(figure=figure)
except Exception:
error = traceback.format_exc()
modal_is_open = True
modal_content = error
logger.error(error)
return train_metric_table, test_metric_table, figure, modal_is_open, modal_content | null |
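For readers unfamiliar with the library calls that ``click_train_test`` ultimately drives through ``ForecastModel``, this is roughly the underlying Merlion workflow: resolve a model class from its registered name, fit it on a training ``TimeSeries``, and forecast the held-out timestamps. The data is synthetic and the hyperparameter is a placeholder, not a value used by the dashboard.

import numpy as np
import pandas as pd
from merlion.models.factory import ModelFactory
from merlion.utils import TimeSeries

index = pd.date_range("2023-01-01", periods=120, freq="D")
df = pd.DataFrame({"y": np.sin(np.arange(120) / 7.0)}, index=index)
train_ts, test_ts = TimeSeries.from_pd(df.iloc[:100]), TimeSeries.from_pd(df.iloc[100:])

model_class = ModelFactory.get_model_class("Arima")
model = model_class(model_class.config_class(max_forecast_steps=20))
model.train(train_ts)
forecast, stderr = model.forecast(time_stamps=test_ts.time_stamps)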
272 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, dcc, callback
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.forecast import ForecastModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
def set_file_mode(value):
if value == "single":
return True, False
else:
return False, True | null |
273 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, callback, dcc
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.anomaly import AnomalyModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
file_manager = FileManager()
def update_select_file_dropdown(n_clicks, filename, features, label):
options = []
ctx = dash.callback_context
if ctx.triggered:
prop_ids = {p["prop_id"].split(".")[0]: p["value"] for p in ctx.triggered}
if "anomaly-select-file-parent" in prop_ids:
files = file_manager.uploaded_files()
for f in files:
options.append({"label": f, "value": f})
if "anomaly-select-file" in prop_ids:
features, label = None, None
return options, features, label | null |
274 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, callback, dcc
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.anomaly import AnomalyModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
file_manager = FileManager()
def update_select_test_file_dropdown(n_clicks):
options = []
ctx = dash.callback_context
prop_id = ctx.triggered_id
if prop_id == "anomaly-select-test-file-parent":
files = file_manager.uploaded_files()
for filename in files:
options.append({"label": filename, "value": filename})
return options | null |
275 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, callback, dcc
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.anomaly import AnomalyModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
file_manager = FileManager()
class AnomalyModel(ModelMixin, DataMixin):
def __init__(self):
def get_available_algorithms(num_input_metrics):
def get_available_thresholds():
def get_threshold_info(threshold):
def _compute_metrics(labels, predictions):
def _plot_anomalies(model, ts, scores, labels=None):
def _check(df, columns, label_column, is_train):
def train(self, algorithm, train_df, test_df, columns, label_column, params, threshold_params, set_progress):
def test(self, model, df, columns, label_column, threshold_params, set_progress):
def select_features(n_clicks, train_file, test_file, label_name):
options = []
ctx = dash.callback_context
prop_id = ctx.triggered_id
if prop_id == "anomaly-select-features-parent":
file_path = None
if train_file:
file_path = os.path.join(file_manager.data_directory, train_file)
elif test_file:
file_path = os.path.join(file_manager.data_directory, test_file)
if file_path:
df = AnomalyModel().load_data(file_path, nrows=2)
options += [{"label": s, "value": s} for s in df.columns if s != label_name]
return options | null |
276 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, callback, dcc
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.anomaly import AnomalyModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
file_manager = FileManager()
class AnomalyModel(ModelMixin, DataMixin):
univariate_algorithms = [
"DefaultDetector",
"ArimaDetector",
"DynamicBaseline",
"IsolationForest",
"ETSDetector",
"MSESDetector",
"ProphetDetector",
"RandomCutForest",
"SarimaDetector",
"WindStats",
"SpectralResidual",
"ZMS",
"DeepPointAnomalyDetector",
]
multivariate_algorithms = ["IsolationForest", "AutoEncoder", "VAE", "DAGMM", "LSTMED"]
thresholds = ["Threshold", "AggregateAlarms"]
def __init__(self):
self.logger = logging.getLogger(__name__)
self.logger.setLevel(logging.DEBUG)
self.logger.addHandler(dash_logger)
def get_available_algorithms(num_input_metrics):
if num_input_metrics <= 0:
return []
elif num_input_metrics == 1:
return AnomalyModel.univariate_algorithms
else:
return AnomalyModel.multivariate_algorithms
def get_available_thresholds():
return AnomalyModel.thresholds
def get_threshold_info(threshold):
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, threshold)
param_info = AnomalyModel._param_info(model_class.__init__)
if not param_info["alm_threshold"]["default"]:
param_info["alm_threshold"]["default"] = 3.0
return param_info
def _compute_metrics(labels, predictions):
metrics = {}
for metric in [TSADMetric.Precision, TSADMetric.Recall, TSADMetric.F1, TSADMetric.MeanTimeToDetect]:
m = metric.value(ground_truth=labels, predict=predictions)
metrics[metric.name] = round(m, 5) if metric.name != "MeanTimeToDetect" else str(m)
return metrics
def _plot_anomalies(model, ts, scores, labels=None):
title = f"{type(model).__name__}: Anomalies in Time Series"
fig = MTSFigure(y=ts, y_prev=None, anom=scores)
return plot_anoms_plotly(fig=fig.plot_plotly(title=title), anomaly_labels=labels)
def _check(df, columns, label_column, is_train):
kind = "train" if is_train else "test"
if label_column and label_column not in df:
label_column = int(label_column)
assert label_column in df, f"The label column {label_column} is not in the {kind} time series."
for i in range(len(columns)):
if columns[i] not in df:
columns[i] = int(columns[i])
assert columns[i] in df, f"The variable {columns[i]} is not in the {kind} time series."
return columns, label_column
def train(self, algorithm, train_df, test_df, columns, label_column, params, threshold_params, set_progress):
columns, label_column = AnomalyModel._check(train_df, columns, label_column, is_train=True)
columns, label_column = AnomalyModel._check(test_df, columns, label_column, is_train=False)
if threshold_params is not None:
thres_class, thres_params = threshold_params
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, thres_class)
params["threshold"] = model_class(**thres_params)
model_class = ModelFactory.get_model_class(algorithm)
model = model_class(model_class.config_class(**params))
train_ts, train_labels = TimeSeries.from_pd(train_df[columns]), None
test_ts, test_labels = TimeSeries.from_pd(test_df[columns]), None
if label_column is not None and label_column != "":
train_labels = TimeSeries.from_pd(train_df[label_column])
test_labels = TimeSeries.from_pd(test_df[label_column])
self.logger.info(f"Training the anomaly detector: {algorithm}...")
set_progress(("2", "10"))
scores = model.train(train_data=train_ts)
set_progress(("6", "10"))
self.logger.info("Computing training performance metrics...")
train_pred = model.post_rule(scores) if model.post_rule is not None else scores
train_metrics = AnomalyModel._compute_metrics(train_labels, train_pred) if train_labels is not None else None
set_progress(("7", "10"))
self.logger.info("Getting test-time results...")
test_pred = model.get_anomaly_label(test_ts)
test_metrics = AnomalyModel._compute_metrics(test_labels, test_pred) if test_labels is not None else None
set_progress(("9", "10"))
self.logger.info("Plotting anomaly scores...")
figure = AnomalyModel._plot_anomalies(model, test_ts, test_pred, test_labels)
self.logger.info("Finished.")
set_progress(("10", "10"))
return model, train_metrics, test_metrics, figure
def test(self, model, df, columns, label_column, threshold_params, set_progress):
columns, label_column = AnomalyModel._check(df, columns, label_column, is_train=False)
threshold = None
if threshold_params is not None:
thres_class, thres_params = threshold_params
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, thres_class)
threshold = model_class(**thres_params)
if threshold is not None:
model.threshold = threshold
self.logger.info(f"Detecting anomalies...")
set_progress(("2", "10"))
test_ts, label_ts = TimeSeries.from_pd(df[columns]), None
if label_column is not None and label_column != "":
label_ts = TimeSeries.from_pd(df[[label_column]])
predictions = model.get_anomaly_label(time_series=test_ts)
set_progress(("7", "10"))
self.logger.info("Computing test performance metrics...")
metrics = AnomalyModel._compute_metrics(label_ts, predictions) if label_ts is not None else None
set_progress(("8", "10"))
self.logger.info("Plotting anomaly labels...")
figure = AnomalyModel._plot_anomalies(model, test_ts, predictions, label_ts)
self.logger.info("Finished.")
set_progress(("10", "10"))
return metrics, figure
def select_label(n_clicks, train_file, test_file, features):
options = []
ctx = dash.callback_context
prop_id = ctx.triggered_id
if prop_id == "anomaly-select-label-parent":
file_path = None
if train_file:
file_path = os.path.join(file_manager.data_directory, train_file)
elif test_file:
file_path = os.path.join(file_manager.data_directory, test_file)
if file_path:
df = AnomalyModel().load_data(file_path, nrows=2)
options += [{"label": s, "value": s} for s in df.columns if s not in (features or [])]
return options | null |
277 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, callback, dcc
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.anomaly import AnomalyModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
class AnomalyModel(ModelMixin, DataMixin):
univariate_algorithms = [
"DefaultDetector",
"ArimaDetector",
"DynamicBaseline",
"IsolationForest",
"ETSDetector",
"MSESDetector",
"ProphetDetector",
"RandomCutForest",
"SarimaDetector",
"WindStats",
"SpectralResidual",
"ZMS",
"DeepPointAnomalyDetector",
]
multivariate_algorithms = ["IsolationForest", "AutoEncoder", "VAE", "DAGMM", "LSTMED"]
thresholds = ["Threshold", "AggregateAlarms"]
def __init__(self):
self.logger = logging.getLogger(__name__)
self.logger.setLevel(logging.DEBUG)
self.logger.addHandler(dash_logger)
def get_available_algorithms(num_input_metrics):
if num_input_metrics <= 0:
return []
elif num_input_metrics == 1:
return AnomalyModel.univariate_algorithms
else:
return AnomalyModel.multivariate_algorithms
def get_available_thresholds():
return AnomalyModel.thresholds
def get_threshold_info(threshold):
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, threshold)
param_info = AnomalyModel._param_info(model_class.__init__)
if not param_info["alm_threshold"]["default"]:
param_info["alm_threshold"]["default"] = 3.0
return param_info
def _compute_metrics(labels, predictions):
metrics = {}
for metric in [TSADMetric.Precision, TSADMetric.Recall, TSADMetric.F1, TSADMetric.MeanTimeToDetect]:
m = metric.value(ground_truth=labels, predict=predictions)
metrics[metric.name] = round(m, 5) if metric.name != "MeanTimeToDetect" else str(m)
return metrics
def _plot_anomalies(model, ts, scores, labels=None):
title = f"{type(model).__name__}: Anomalies in Time Series"
fig = MTSFigure(y=ts, y_prev=None, anom=scores)
return plot_anoms_plotly(fig=fig.plot_plotly(title=title), anomaly_labels=labels)
def _check(df, columns, label_column, is_train):
kind = "train" if is_train else "test"
if label_column and label_column not in df:
label_column = int(label_column)
assert label_column in df, f"The label column {label_column} is not in the {kind} time series."
for i in range(len(columns)):
if columns[i] not in df:
columns[i] = int(columns[i])
assert columns[i] in df, f"The variable {columns[i]} is not in the {kind} time series."
return columns, label_column
def train(self, algorithm, train_df, test_df, columns, label_column, params, threshold_params, set_progress):
columns, label_column = AnomalyModel._check(train_df, columns, label_column, is_train=True)
columns, label_column = AnomalyModel._check(test_df, columns, label_column, is_train=False)
if threshold_params is not None:
thres_class, thres_params = threshold_params
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, thres_class)
params["threshold"] = model_class(**thres_params)
model_class = ModelFactory.get_model_class(algorithm)
model = model_class(model_class.config_class(**params))
train_ts, train_labels = TimeSeries.from_pd(train_df[columns]), None
test_ts, test_labels = TimeSeries.from_pd(test_df[columns]), None
if label_column is not None and label_column != "":
train_labels = TimeSeries.from_pd(train_df[label_column])
test_labels = TimeSeries.from_pd(test_df[label_column])
self.logger.info(f"Training the anomaly detector: {algorithm}...")
set_progress(("2", "10"))
scores = model.train(train_data=train_ts)
set_progress(("6", "10"))
self.logger.info("Computing training performance metrics...")
train_pred = model.post_rule(scores) if model.post_rule is not None else scores
train_metrics = AnomalyModel._compute_metrics(train_labels, train_pred) if train_labels is not None else None
set_progress(("7", "10"))
self.logger.info("Getting test-time results...")
test_pred = model.get_anomaly_label(test_ts)
test_metrics = AnomalyModel._compute_metrics(test_labels, test_pred) if test_labels is not None else None
set_progress(("9", "10"))
self.logger.info("Plotting anomaly scores...")
figure = AnomalyModel._plot_anomalies(model, test_ts, test_pred, test_labels)
self.logger.info("Finished.")
set_progress(("10", "10"))
return model, train_metrics, test_metrics, figure
def test(self, model, df, columns, label_column, threshold_params, set_progress):
columns, label_column = AnomalyModel._check(df, columns, label_column, is_train=False)
threshold = None
if threshold_params is not None:
thres_class, thres_params = threshold_params
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, thres_class)
threshold = model_class(**thres_params)
if threshold is not None:
model.threshold = threshold
self.logger.info(f"Detecting anomalies...")
set_progress(("2", "10"))
test_ts, label_ts = TimeSeries.from_pd(df[columns]), None
if label_column is not None and label_column != "":
label_ts = TimeSeries.from_pd(df[[label_column]])
predictions = model.get_anomaly_label(time_series=test_ts)
set_progress(("7", "10"))
self.logger.info("Computing test performance metrics...")
metrics = AnomalyModel._compute_metrics(label_ts, predictions) if label_ts is not None else None
set_progress(("8", "10"))
self.logger.info("Plotting anomaly labels...")
figure = AnomalyModel._plot_anomalies(model, test_ts, predictions, label_ts)
self.logger.info("Finished.")
set_progress(("10", "10"))
return metrics, figure
def select_algorithm_parent(n_clicks, selected_metrics):
options = []
ctx = dash.callback_context
prop_id = ctx.triggered_id
if prop_id == "anomaly-select-algorithm-parent":
algorithms = AnomalyModel.get_available_algorithms(len(selected_metrics))
options += [{"label": s, "value": s} for s in algorithms]
return options | null |
278 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, callback, dcc
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.anomaly import AnomalyModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
class AnomalyModel(ModelMixin, DataMixin):
univariate_algorithms = [
"DefaultDetector",
"ArimaDetector",
"DynamicBaseline",
"IsolationForest",
"ETSDetector",
"MSESDetector",
"ProphetDetector",
"RandomCutForest",
"SarimaDetector",
"WindStats",
"SpectralResidual",
"ZMS",
"DeepPointAnomalyDetector",
]
multivariate_algorithms = ["IsolationForest", "AutoEncoder", "VAE", "DAGMM", "LSTMED"]
thresholds = ["Threshold", "AggregateAlarms"]
def __init__(self):
self.logger = logging.getLogger(__name__)
self.logger.setLevel(logging.DEBUG)
self.logger.addHandler(dash_logger)
def get_available_algorithms(num_input_metrics):
if num_input_metrics <= 0:
return []
elif num_input_metrics == 1:
return AnomalyModel.univariate_algorithms
else:
return AnomalyModel.multivariate_algorithms
def get_available_thresholds():
return AnomalyModel.thresholds
def get_threshold_info(threshold):
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, threshold)
param_info = AnomalyModel._param_info(model_class.__init__)
if not param_info["alm_threshold"]["default"]:
param_info["alm_threshold"]["default"] = 3.0
return param_info
def _compute_metrics(labels, predictions):
metrics = {}
for metric in [TSADMetric.Precision, TSADMetric.Recall, TSADMetric.F1, TSADMetric.MeanTimeToDetect]:
m = metric.value(ground_truth=labels, predict=predictions)
metrics[metric.name] = round(m, 5) if metric.name != "MeanTimeToDetect" else str(m)
return metrics
def _plot_anomalies(model, ts, scores, labels=None):
title = f"{type(model).__name__}: Anomalies in Time Series"
fig = MTSFigure(y=ts, y_prev=None, anom=scores)
return plot_anoms_plotly(fig=fig.plot_plotly(title=title), anomaly_labels=labels)
def _check(df, columns, label_column, is_train):
kind = "train" if is_train else "test"
if label_column and label_column not in df:
label_column = int(label_column)
assert label_column in df, f"The label column {label_column} is not in the {kind} time series."
for i in range(len(columns)):
if columns[i] not in df:
columns[i] = int(columns[i])
assert columns[i] in df, f"The variable {columns[i]} is not in the {kind} time series."
return columns, label_column
def train(self, algorithm, train_df, test_df, columns, label_column, params, threshold_params, set_progress):
columns, label_column = AnomalyModel._check(train_df, columns, label_column, is_train=True)
columns, label_column = AnomalyModel._check(test_df, columns, label_column, is_train=False)
if threshold_params is not None:
thres_class, thres_params = threshold_params
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, thres_class)
params["threshold"] = model_class(**thres_params)
model_class = ModelFactory.get_model_class(algorithm)
model = model_class(model_class.config_class(**params))
train_ts, train_labels = TimeSeries.from_pd(train_df[columns]), None
test_ts, test_labels = TimeSeries.from_pd(test_df[columns]), None
if label_column is not None and label_column != "":
train_labels = TimeSeries.from_pd(train_df[label_column])
test_labels = TimeSeries.from_pd(test_df[label_column])
self.logger.info(f"Training the anomaly detector: {algorithm}...")
set_progress(("2", "10"))
scores = model.train(train_data=train_ts)
set_progress(("6", "10"))
self.logger.info("Computing training performance metrics...")
train_pred = model.post_rule(scores) if model.post_rule is not None else scores
train_metrics = AnomalyModel._compute_metrics(train_labels, train_pred) if train_labels is not None else None
set_progress(("7", "10"))
self.logger.info("Getting test-time results...")
test_pred = model.get_anomaly_label(test_ts)
test_metrics = AnomalyModel._compute_metrics(test_labels, test_pred) if test_labels is not None else None
set_progress(("9", "10"))
self.logger.info("Plotting anomaly scores...")
figure = AnomalyModel._plot_anomalies(model, test_ts, test_pred, test_labels)
self.logger.info("Finished.")
set_progress(("10", "10"))
return model, train_metrics, test_metrics, figure
def test(self, model, df, columns, label_column, threshold_params, set_progress):
columns, label_column = AnomalyModel._check(df, columns, label_column, is_train=False)
threshold = None
if threshold_params is not None:
thres_class, thres_params = threshold_params
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, thres_class)
threshold = model_class(**thres_params)
if threshold is not None:
model.threshold = threshold
self.logger.info(f"Detecting anomalies...")
set_progress(("2", "10"))
test_ts, label_ts = TimeSeries.from_pd(df[columns]), None
if label_column is not None and label_column != "":
label_ts = TimeSeries.from_pd(df[[label_column]])
predictions = model.get_anomaly_label(time_series=test_ts)
set_progress(("7", "10"))
self.logger.info("Computing test performance metrics...")
metrics = AnomalyModel._compute_metrics(label_ts, predictions) if label_ts is not None else None
set_progress(("8", "10"))
self.logger.info("Plotting anomaly labels...")
figure = AnomalyModel._plot_anomalies(model, test_ts, predictions, label_ts)
self.logger.info("Finished.")
set_progress(("10", "10"))
return metrics, figure
def create_param_table(params=None, height=100):
if params is None or len(params) == 0:
data = [{"Parameter": "", "Value": ""}]
else:
data = [{"Parameter": key, "Value": str(value["default"])} for key, value in params.items()]
table = dash_table.DataTable(
data=data,
columns=[{"id": "Parameter", "name": "Parameter"}, {"id": "Value", "name": "Value"}],
editable=True,
style_header_conditional=[{"textAlign": "center", "font-family": "Salesforce Sans"}],
style_cell_conditional=[{"textAlign": "center", "font-family": "Salesforce Sans"}],
style_table={"overflowX": "scroll", "overflowY": "scroll", "height": height},
style_header=dict(backgroundColor=TABLE_HEADER_COLOR, color="white"),
style_data=dict(backgroundColor=TABLE_DATA_COLOR),
)
return table
def select_algorithm(algorithm):
param_table = create_param_table()
ctx = dash.callback_context
prop_id = ctx.triggered_id
if prop_id == "anomaly-select-algorithm":
param_info = AnomalyModel.get_parameter_info(algorithm)
param_table = create_param_table(param_info)
return param_table | null |
279 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, callback, dcc
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.anomaly import AnomalyModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
class AnomalyModel(ModelMixin, DataMixin):
univariate_algorithms = [
"DefaultDetector",
"ArimaDetector",
"DynamicBaseline",
"IsolationForest",
"ETSDetector",
"MSESDetector",
"ProphetDetector",
"RandomCutForest",
"SarimaDetector",
"WindStats",
"SpectralResidual",
"ZMS",
"DeepPointAnomalyDetector",
]
multivariate_algorithms = ["IsolationForest", "AutoEncoder", "VAE", "DAGMM", "LSTMED"]
thresholds = ["Threshold", "AggregateAlarms"]
def __init__(self):
self.logger = logging.getLogger(__name__)
self.logger.setLevel(logging.DEBUG)
self.logger.addHandler(dash_logger)
def get_available_algorithms(num_input_metrics):
if num_input_metrics <= 0:
return []
elif num_input_metrics == 1:
return AnomalyModel.univariate_algorithms
else:
return AnomalyModel.multivariate_algorithms
def get_available_thresholds():
return AnomalyModel.thresholds
def get_threshold_info(threshold):
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, threshold)
param_info = AnomalyModel._param_info(model_class.__init__)
if not param_info["alm_threshold"]["default"]:
param_info["alm_threshold"]["default"] = 3.0
return param_info
def _compute_metrics(labels, predictions):
metrics = {}
for metric in [TSADMetric.Precision, TSADMetric.Recall, TSADMetric.F1, TSADMetric.MeanTimeToDetect]:
m = metric.value(ground_truth=labels, predict=predictions)
metrics[metric.name] = round(m, 5) if metric.name != "MeanTimeToDetect" else str(m)
return metrics
def _plot_anomalies(model, ts, scores, labels=None):
title = f"{type(model).__name__}: Anomalies in Time Series"
fig = MTSFigure(y=ts, y_prev=None, anom=scores)
return plot_anoms_plotly(fig=fig.plot_plotly(title=title), anomaly_labels=labels)
def _check(df, columns, label_column, is_train):
kind = "train" if is_train else "test"
if label_column and label_column not in df:
label_column = int(label_column)
assert label_column in df, f"The label column {label_column} is not in the {kind} time series."
for i in range(len(columns)):
if columns[i] not in df:
columns[i] = int(columns[i])
assert columns[i] in df, f"The variable {columns[i]} is not in the {kind} time series."
return columns, label_column
def train(self, algorithm, train_df, test_df, columns, label_column, params, threshold_params, set_progress):
columns, label_column = AnomalyModel._check(train_df, columns, label_column, is_train=True)
columns, label_column = AnomalyModel._check(test_df, columns, label_column, is_train=False)
if threshold_params is not None:
thres_class, thres_params = threshold_params
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, thres_class)
params["threshold"] = model_class(**thres_params)
model_class = ModelFactory.get_model_class(algorithm)
model = model_class(model_class.config_class(**params))
train_ts, train_labels = TimeSeries.from_pd(train_df[columns]), None
test_ts, test_labels = TimeSeries.from_pd(test_df[columns]), None
if label_column is not None and label_column != "":
train_labels = TimeSeries.from_pd(train_df[label_column])
test_labels = TimeSeries.from_pd(test_df[label_column])
self.logger.info(f"Training the anomaly detector: {algorithm}...")
set_progress(("2", "10"))
scores = model.train(train_data=train_ts)
set_progress(("6", "10"))
self.logger.info("Computing training performance metrics...")
train_pred = model.post_rule(scores) if model.post_rule is not None else scores
train_metrics = AnomalyModel._compute_metrics(train_labels, train_pred) if train_labels is not None else None
set_progress(("7", "10"))
self.logger.info("Getting test-time results...")
test_pred = model.get_anomaly_label(test_ts)
test_metrics = AnomalyModel._compute_metrics(test_labels, test_pred) if test_labels is not None else None
set_progress(("9", "10"))
self.logger.info("Plotting anomaly scores...")
figure = AnomalyModel._plot_anomalies(model, test_ts, test_pred, test_labels)
self.logger.info("Finished.")
set_progress(("10", "10"))
return model, train_metrics, test_metrics, figure
def test(self, model, df, columns, label_column, threshold_params, set_progress):
columns, label_column = AnomalyModel._check(df, columns, label_column, is_train=False)
threshold = None
if threshold_params is not None:
thres_class, thres_params = threshold_params
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, thres_class)
threshold = model_class(**thres_params)
if threshold is not None:
model.threshold = threshold
self.logger.info(f"Detecting anomalies...")
set_progress(("2", "10"))
test_ts, label_ts = TimeSeries.from_pd(df[columns]), None
if label_column is not None and label_column != "":
label_ts = TimeSeries.from_pd(df[[label_column]])
predictions = model.get_anomaly_label(time_series=test_ts)
set_progress(("7", "10"))
self.logger.info("Computing test performance metrics...")
metrics = AnomalyModel._compute_metrics(label_ts, predictions) if label_ts is not None else None
set_progress(("8", "10"))
self.logger.info("Plotting anomaly labels...")
figure = AnomalyModel._plot_anomalies(model, test_ts, predictions, label_ts)
self.logger.info("Finished.")
set_progress(("10", "10"))
return metrics, figure
def select_threshold_parent(n_clicks):
options = []
ctx = dash.callback_context
prop_id = ctx.triggered_id
if prop_id == "anomaly-select-threshold-parent":
algorithms = AnomalyModel.get_available_thresholds()
options += [{"label": s, "value": s} for s in algorithms]
return options | null |
280 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, callback, dcc
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.anomaly import AnomalyModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
class AnomalyModel(ModelMixin, DataMixin):
univariate_algorithms = [
"DefaultDetector",
"ArimaDetector",
"DynamicBaseline",
"IsolationForest",
"ETSDetector",
"MSESDetector",
"ProphetDetector",
"RandomCutForest",
"SarimaDetector",
"WindStats",
"SpectralResidual",
"ZMS",
"DeepPointAnomalyDetector",
]
multivariate_algorithms = ["IsolationForest", "AutoEncoder", "VAE", "DAGMM", "LSTMED"]
thresholds = ["Threshold", "AggregateAlarms"]
def __init__(self):
self.logger = logging.getLogger(__name__)
self.logger.setLevel(logging.DEBUG)
self.logger.addHandler(dash_logger)
def get_available_algorithms(num_input_metrics):
if num_input_metrics <= 0:
return []
elif num_input_metrics == 1:
return AnomalyModel.univariate_algorithms
else:
return AnomalyModel.multivariate_algorithms
def get_available_thresholds():
return AnomalyModel.thresholds
def get_threshold_info(threshold):
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, threshold)
param_info = AnomalyModel._param_info(model_class.__init__)
if not param_info["alm_threshold"]["default"]:
param_info["alm_threshold"]["default"] = 3.0
return param_info
def _compute_metrics(labels, predictions):
metrics = {}
for metric in [TSADMetric.Precision, TSADMetric.Recall, TSADMetric.F1, TSADMetric.MeanTimeToDetect]:
m = metric.value(ground_truth=labels, predict=predictions)
metrics[metric.name] = round(m, 5) if metric.name != "MeanTimeToDetect" else str(m)
return metrics
def _plot_anomalies(model, ts, scores, labels=None):
title = f"{type(model).__name__}: Anomalies in Time Series"
fig = MTSFigure(y=ts, y_prev=None, anom=scores)
return plot_anoms_plotly(fig=fig.plot_plotly(title=title), anomaly_labels=labels)
def _check(df, columns, label_column, is_train):
kind = "train" if is_train else "test"
if label_column and label_column not in df:
label_column = int(label_column)
assert label_column in df, f"The label column {label_column} is not in the {kind} time series."
for i in range(len(columns)):
if columns[i] not in df:
columns[i] = int(columns[i])
assert columns[i] in df, f"The variable {columns[i]} is not in the {kind} time series."
return columns, label_column
def train(self, algorithm, train_df, test_df, columns, label_column, params, threshold_params, set_progress):
columns, label_column = AnomalyModel._check(train_df, columns, label_column, is_train=True)
columns, label_column = AnomalyModel._check(test_df, columns, label_column, is_train=False)
if threshold_params is not None:
thres_class, thres_params = threshold_params
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, thres_class)
params["threshold"] = model_class(**thres_params)
model_class = ModelFactory.get_model_class(algorithm)
model = model_class(model_class.config_class(**params))
train_ts, train_labels = TimeSeries.from_pd(train_df[columns]), None
test_ts, test_labels = TimeSeries.from_pd(test_df[columns]), None
if label_column is not None and label_column != "":
train_labels = TimeSeries.from_pd(train_df[label_column])
test_labels = TimeSeries.from_pd(test_df[label_column])
self.logger.info(f"Training the anomaly detector: {algorithm}...")
set_progress(("2", "10"))
scores = model.train(train_data=train_ts)
set_progress(("6", "10"))
self.logger.info("Computing training performance metrics...")
train_pred = model.post_rule(scores) if model.post_rule is not None else scores
train_metrics = AnomalyModel._compute_metrics(train_labels, train_pred) if train_labels is not None else None
set_progress(("7", "10"))
self.logger.info("Getting test-time results...")
test_pred = model.get_anomaly_label(test_ts)
test_metrics = AnomalyModel._compute_metrics(test_labels, test_pred) if test_labels is not None else None
set_progress(("9", "10"))
self.logger.info("Plotting anomaly scores...")
figure = AnomalyModel._plot_anomalies(model, test_ts, test_pred, test_labels)
self.logger.info("Finished.")
set_progress(("10", "10"))
return model, train_metrics, test_metrics, figure
def test(self, model, df, columns, label_column, threshold_params, set_progress):
columns, label_column = AnomalyModel._check(df, columns, label_column, is_train=False)
threshold = None
if threshold_params is not None:
thres_class, thres_params = threshold_params
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, thres_class)
threshold = model_class(**thres_params)
if threshold is not None:
model.threshold = threshold
self.logger.info(f"Detecting anomalies...")
set_progress(("2", "10"))
test_ts, label_ts = TimeSeries.from_pd(df[columns]), None
if label_column is not None and label_column != "":
label_ts = TimeSeries.from_pd(df[[label_column]])
predictions = model.get_anomaly_label(time_series=test_ts)
set_progress(("7", "10"))
self.logger.info("Computing test performance metrics...")
metrics = AnomalyModel._compute_metrics(label_ts, predictions) if label_ts is not None else None
set_progress(("8", "10"))
self.logger.info("Plotting anomaly labels...")
figure = AnomalyModel._plot_anomalies(model, test_ts, predictions, label_ts)
self.logger.info("Finished.")
set_progress(("10", "10"))
return metrics, figure
def create_param_table(params=None, height=100):
if params is None or len(params) == 0:
data = [{"Parameter": "", "Value": ""}]
else:
data = [{"Parameter": key, "Value": str(value["default"])} for key, value in params.items()]
table = dash_table.DataTable(
data=data,
columns=[{"id": "Parameter", "name": "Parameter"}, {"id": "Value", "name": "Value"}],
editable=True,
style_header_conditional=[{"textAlign": "center", "font-family": "Salesforce Sans"}],
style_cell_conditional=[{"textAlign": "center", "font-family": "Salesforce Sans"}],
style_table={"overflowX": "scroll", "overflowY": "scroll", "height": height},
style_header=dict(backgroundColor=TABLE_HEADER_COLOR, color="white"),
style_data=dict(backgroundColor=TABLE_DATA_COLOR),
)
return table
def select_threshold(threshold):
param_table = create_param_table(height=80)
ctx = dash.callback_context
prop_id = ctx.triggered_id
if prop_id == "anomaly-select-threshold" and threshold:
param_info = AnomalyModel.get_threshold_info(threshold)
param_table = create_param_table(param_info, height=80)
return param_table | null |
281 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, callback, dcc
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.anomaly import AnomalyModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
logger = logging.getLogger(__name__)
file_manager = FileManager()
class AnomalyModel(ModelMixin, DataMixin):
univariate_algorithms = [
"DefaultDetector",
"ArimaDetector",
"DynamicBaseline",
"IsolationForest",
"ETSDetector",
"MSESDetector",
"ProphetDetector",
"RandomCutForest",
"SarimaDetector",
"WindStats",
"SpectralResidual",
"ZMS",
"DeepPointAnomalyDetector",
]
multivariate_algorithms = ["IsolationForest", "AutoEncoder", "VAE", "DAGMM", "LSTMED"]
thresholds = ["Threshold", "AggregateAlarms"]
def __init__(self):
self.logger = logging.getLogger(__name__)
self.logger.setLevel(logging.DEBUG)
self.logger.addHandler(dash_logger)
def get_available_algorithms(num_input_metrics):
if num_input_metrics <= 0:
return []
elif num_input_metrics == 1:
return AnomalyModel.univariate_algorithms
else:
return AnomalyModel.multivariate_algorithms
def get_available_thresholds():
return AnomalyModel.thresholds
def get_threshold_info(threshold):
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, threshold)
param_info = AnomalyModel._param_info(model_class.__init__)
if not param_info["alm_threshold"]["default"]:
param_info["alm_threshold"]["default"] = 3.0
return param_info
def _compute_metrics(labels, predictions):
metrics = {}
for metric in [TSADMetric.Precision, TSADMetric.Recall, TSADMetric.F1, TSADMetric.MeanTimeToDetect]:
m = metric.value(ground_truth=labels, predict=predictions)
metrics[metric.name] = round(m, 5) if metric.name != "MeanTimeToDetect" else str(m)
return metrics
def _plot_anomalies(model, ts, scores, labels=None):
title = f"{type(model).__name__}: Anomalies in Time Series"
fig = MTSFigure(y=ts, y_prev=None, anom=scores)
return plot_anoms_plotly(fig=fig.plot_plotly(title=title), anomaly_labels=labels)
def _check(df, columns, label_column, is_train):
kind = "train" if is_train else "test"
if label_column and label_column not in df:
label_column = int(label_column)
assert label_column in df, f"The label column {label_column} is not in the {kind} time series."
for i in range(len(columns)):
if columns[i] not in df:
columns[i] = int(columns[i])
assert columns[i] in df, f"The variable {columns[i]} is not in the {kind} time series."
return columns, label_column
def train(self, algorithm, train_df, test_df, columns, label_column, params, threshold_params, set_progress):
columns, label_column = AnomalyModel._check(train_df, columns, label_column, is_train=True)
columns, label_column = AnomalyModel._check(test_df, columns, label_column, is_train=False)
if threshold_params is not None:
thres_class, thres_params = threshold_params
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, thres_class)
params["threshold"] = model_class(**thres_params)
model_class = ModelFactory.get_model_class(algorithm)
model = model_class(model_class.config_class(**params))
train_ts, train_labels = TimeSeries.from_pd(train_df[columns]), None
test_ts, test_labels = TimeSeries.from_pd(test_df[columns]), None
if label_column is not None and label_column != "":
train_labels = TimeSeries.from_pd(train_df[label_column])
test_labels = TimeSeries.from_pd(test_df[label_column])
self.logger.info(f"Training the anomaly detector: {algorithm}...")
set_progress(("2", "10"))
scores = model.train(train_data=train_ts)
set_progress(("6", "10"))
self.logger.info("Computing training performance metrics...")
train_pred = model.post_rule(scores) if model.post_rule is not None else scores
train_metrics = AnomalyModel._compute_metrics(train_labels, train_pred) if train_labels is not None else None
set_progress(("7", "10"))
self.logger.info("Getting test-time results...")
test_pred = model.get_anomaly_label(test_ts)
test_metrics = AnomalyModel._compute_metrics(test_labels, test_pred) if test_labels is not None else None
set_progress(("9", "10"))
self.logger.info("Plotting anomaly scores...")
figure = AnomalyModel._plot_anomalies(model, test_ts, test_pred, test_labels)
self.logger.info("Finished.")
set_progress(("10", "10"))
return model, train_metrics, test_metrics, figure
def test(self, model, df, columns, label_column, threshold_params, set_progress):
columns, label_column = AnomalyModel._check(df, columns, label_column, is_train=False)
threshold = None
if threshold_params is not None:
thres_class, thres_params = threshold_params
module = importlib.import_module("merlion.post_process.threshold")
model_class = getattr(module, thres_class)
threshold = model_class(**thres_params)
if threshold is not None:
model.threshold = threshold
self.logger.info(f"Detecting anomalies...")
set_progress(("2", "10"))
test_ts, label_ts = TimeSeries.from_pd(df[columns]), None
if label_column is not None and label_column != "":
label_ts = TimeSeries.from_pd(df[[label_column]])
predictions = model.get_anomaly_label(time_series=test_ts)
set_progress(("7", "10"))
self.logger.info("Computing test performance metrics...")
metrics = AnomalyModel._compute_metrics(label_ts, predictions) if label_ts is not None else None
set_progress(("8", "10"))
self.logger.info("Plotting anomaly labels...")
figure = AnomalyModel._plot_anomalies(model, test_ts, predictions, label_ts)
self.logger.info("Finished.")
set_progress(("10", "10"))
return metrics, figure
def create_metric_table(metrics=None):
if metrics is None or len(metrics) == 0:
data, columns = {}, []
for i in range(4):
data[f"Metric {i}"] = "-"
columns.append({"id": f"Metric {i}", "name": f"Metric {i}"})
else:
data = metrics
columns = [{"id": key, "name": key} for key in metrics.keys()]
if not isinstance(data, list):
data = [data]
table = dash_table.DataTable(
data=data,
columns=columns,
editable=False,
style_header_conditional=[{"textAlign": "center", "font-family": "Salesforce Sans"}],
style_cell_conditional=[{"textAlign": "center", "font-family": "Salesforce Sans"}],
style_table={"overflowX": "scroll"},
style_header=dict(backgroundColor=TABLE_HEADER_COLOR, color="white"),
style_data=dict(backgroundColor=TABLE_DATA_COLOR),
)
return table
def create_empty_figure():
return plot_timeseries(pd.DataFrame(index=pd.DatetimeIndex([])))
def click_train_test(
set_progress,
train_clicks,
test_clicks,
modal_close,
train_filename,
test_filename,
columns,
algorithm,
label_column,
param_table,
threshold_class,
threshold_table,
train_percentage,
train_metrics,
file_mode,
):
ctx = dash.callback_context
modal_is_open = False
modal_content = ""
train_metric_table = create_metric_table()
test_metric_table = create_metric_table()
figure = create_empty_figure()
set_progress((str(0), str(10)))
try:
if ctx.triggered:
prop_id = ctx.triggered_id
if prop_id == "anomaly-train-btn" and train_clicks > 0:
assert train_filename, "The training file is empty!"
assert columns, "Please select variables/metrics for analysis."
assert algorithm, "Please select an anomaly detector to train."
df = AnomalyModel().load_data(os.path.join(file_manager.data_directory, train_filename))
if file_mode == "single":
n = int(int(train_percentage) * len(df) / 100)
train_df = df.iloc[:n]
test_df = df.iloc[n:]
else:
assert test_filename, "The test file is empty!"
train_df = df
test_df = AnomalyModel().load_data(os.path.join(file_manager.data_directory, test_filename))
alg_params = AnomalyModel.parse_parameters(
param_info=AnomalyModel.get_parameter_info(algorithm),
params={p["Parameter"]: p["Value"] for p in param_table["props"]["data"]},
)
if threshold_class:
threshold_params = (
threshold_class,
AnomalyModel.parse_parameters(
param_info=AnomalyModel.get_threshold_info(threshold_class),
params={p["Parameter"]: p["Value"] for p in threshold_table["props"]["data"]},
),
)
else:
threshold_params = None
model, train_metrics, test_metrics, figure = AnomalyModel().train(
algorithm, train_df, test_df, columns, label_column, alg_params, threshold_params, set_progress
)
AnomalyModel.save_model(file_manager.model_directory, model, algorithm)
if train_metrics is not None:
train_metric_table = create_metric_table(train_metrics)
if test_metrics is not None:
test_metric_table = create_metric_table(test_metrics)
figure = dcc.Graph(figure=figure)
elif prop_id == "anomaly-test-btn" and test_clicks > 0:
assert columns, "Please select variables/metrics for analysis."
assert algorithm, "Please select a trained anomaly detector."
if file_mode == "single":
df = AnomalyModel().load_data(os.path.join(file_manager.data_directory, train_filename))
n = int(int(train_percentage) * len(df) / 100)
df = df.iloc[n:]
else:
assert test_filename, "The test file is empty!"
df = AnomalyModel().load_data(os.path.join(file_manager.data_directory, test_filename))
model = AnomalyModel.load_model(file_manager.model_directory, algorithm)
if threshold_class:
threshold_params = (
threshold_class,
AnomalyModel.parse_parameters(
param_info=AnomalyModel.get_threshold_info(threshold_class),
params={p["Parameter"]: p["Value"] for p in threshold_table["props"]["data"]},
),
)
else:
threshold_params = None
train_metrics = train_metrics[0] if isinstance(train_metrics, list) else train_metrics
train_metric_table = create_metric_table(train_metrics["props"]["data"][0])
metrics, figure = AnomalyModel().test(model, df, columns, label_column, threshold_params, set_progress)
if metrics is not None:
test_metric_table = create_metric_table(metrics)
figure = dcc.Graph(figure=figure)
except Exception:
error = traceback.format_exc()
modal_is_open = True
modal_content = error
logger.error(error)
return train_metric_table, test_metric_table, figure, modal_is_open, modal_content | null |
282 | import logging
import os
import traceback
import dash
from dash import Input, Output, State, callback, dcc
from merlion.dashboard.utils.file_manager import FileManager
from merlion.dashboard.models.anomaly import AnomalyModel
from merlion.dashboard.pages.utils import create_param_table, create_metric_table, create_empty_figure
def set_file_mode(value):
if value == "single":
return True, False
else:
return False, True | null |
283 | import dash
import dash_bootstrap_components as dbc
from dash import dcc
from dash import html
from dash.dependencies import Input, Output, State
import logging
from merlion.dashboard.utils.layout import create_banner, create_layout
from merlion.dashboard.pages.data import create_data_layout
from merlion.dashboard.pages.forecast import create_forecasting_layout
from merlion.dashboard.pages.anomaly import create_anomaly_layout
from merlion.dashboard.callbacks import data
from merlion.dashboard.callbacks import forecast
from merlion.dashboard.callbacks import anomaly
app = dash.Dash(
__name__,
meta_tags=[{"name": "viewport", "content": "width=device-width, initial-scale=1"}],
external_stylesheets=[dbc.themes.BOOTSTRAP],
title="Merlion Dashboard",
)
app.config["suppress_callback_exceptions"] = True
app.layout = html.Div(
[
dcc.Location(id="url", refresh=False),
html.Div(id="page-content"),
dcc.Store(id="data-state"),
dcc.Store(id="anomaly-state"),
dcc.Store(id="forecasting-state"),
]
)
def create_banner(app):
return html.Div(
id="banner",
className="banner",
children=[
html.Img(src=app.get_asset_url("merlion_small.svg")),
html.Plaintext(" Powered by Salesforce AI Research"),
],
)
def create_layout() -> html.Div:
children, values = [], []
# Data analysis tab
children.append(
dcc.Tab(label="File Manager", value="file-manager", style=tab_style, selected_style=tab_selected_style)
)
values.append("file-manager")
# Anomaly detection tab
children.append(
dcc.Tab(label="Anomaly Detection", value="anomaly", style=tab_style, selected_style=tab_selected_style)
)
values.append("anomaly")
# Forecasting tab
children.append(
dcc.Tab(label="Forecasting", value="forecasting", style=tab_style, selected_style=tab_selected_style)
)
values.append("forecasting")
layout = html.Div(
id="app-content",
children=[dcc.Tabs(id="tabs", value=values[0] if values else "none", children=children), html.Div(id="plots")],
)
return layout
def _display_page(pathname):
return html.Div(id="app-container", children=[create_banner(app), html.Br(), create_layout()]) | null |
284 | import dash
import dash_bootstrap_components as dbc
from dash import dcc
from dash import html
from dash.dependencies import Input, Output, State
import logging
from merlion.dashboard.utils.layout import create_banner, create_layout
from merlion.dashboard.pages.data import create_data_layout
from merlion.dashboard.pages.forecast import create_forecasting_layout
from merlion.dashboard.pages.anomaly import create_anomaly_layout
from merlion.dashboard.callbacks import data
from merlion.dashboard.callbacks import forecast
from merlion.dashboard.callbacks import anomaly
def create_data_layout() -> html.Div:
return html.Div(
id="data_views",
children=[
# Left column
html.Div(id="left-column-data", className="three columns", children=[create_control_panel()]),
# Right column
html.Div(className="nine columns", children=create_right_column()),
],
)
def create_forecasting_layout() -> html.Div:
return html.Div(
id="forecasting_views",
children=[
# Left column
html.Div(id="left-column-data", className="three columns", children=[create_control_panel()]),
# Right column
html.Div(className="nine columns", children=create_right_column()),
],
)
def create_anomaly_layout() -> html.Div:
return html.Div(
id="anomaly_views",
children=[
# Left column
html.Div(id="left-column-data", className="three columns", children=[create_control_panel()]),
# Right column
html.Div(className="nine columns", children=create_right_column()),
],
)
def _click_tab(tab, data_state, anomaly_state, forecasting_state):
if tab == "file-manager":
return create_data_layout()
elif tab == "forecasting":
return create_forecasting_layout()
elif tab == "anomaly":
return create_anomaly_layout() | null |
285 | import plotly
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from dash import dash_table, dcc
from merlion.dashboard.settings import *
def data_table(df, n=1000, page_size=10):
if df is not None:
df = df.head(n)
columns = [{"name": "Index", "id": "Index"}] + [{"name": c, "id": c} for c in df.columns]
data = []
for i in range(df.shape[0]):
d = {c: v for c, v in zip(df.columns, df.values[i])}
d.update({"Index": df.index[i]})
data.append(d)
table = dash_table.DataTable(
id="table",
columns=columns,
data=data,
style_cell_conditional=[{"textAlign": "center"}],
style_table={"overflowX": "scroll"},
editable=False,
column_selectable="single",
page_action="native",
page_size=page_size,
page_current=0,
style_header=dict(backgroundColor=TABLE_HEADER_COLOR),
style_data=dict(backgroundColor=TABLE_DATA_COLOR),
)
return table
else:
return dash_table.DataTable() | null |
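A minimal usage sketch for `data_table`, assuming Dash and the Merlion dashboard settings are importable as in the row above; the toy DataFrame below is illustrative:
import pandas as pd

# Build a small indexed DataFrame and render it as an interactive Dash table.
df = pd.DataFrame(
    {"metric_a": [1.0, 2.0, 3.0], "metric_b": [0.5, 0.7, 0.9]},
    index=pd.date_range("2022-01-01", periods=3, freq="H"),
)
table = data_table(df, n=1000, page_size=10)  # dash_table.DataTable with an extra "Index" column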
286 | from abc import abstractmethod
from collections import OrderedDict
import copy
import logging
from typing import List, Optional, Union
import numpy as np
from merlion.evaluate.anomaly import TSADMetric
from merlion.evaluate.forecast import ForecastMetric
from merlion.utils import UnivariateTimeSeries, TimeSeries
from merlion.utils.misc import AutodocABCMeta
The provided code snippet includes necessary dependencies for implementing the `_align_outputs` function. Write a Python function `def _align_outputs(all_model_outs: List[TimeSeries], target: TimeSeries) -> List[Optional[TimeSeries]]` to solve the following problem:
Aligns the outputs of each model to the time series ``target``.
Here is the function:
def _align_outputs(all_model_outs: List[TimeSeries], target: TimeSeries) -> List[Optional[TimeSeries]]:
"""
Aligns the outputs of each model to the time series ``target``.
"""
if all(out is None for out in all_model_outs):
return [None for _ in all_model_outs]
if target is None:
time_stamps = np.unique(np.concatenate([out.to_pd().index for out in all_model_outs if out is not None]))
else:
t0 = min(min(v.index[0] for v in out.univariates) for out in all_model_outs if out is not None)
tf = max(max(v.index[-1] for v in out.univariates) for out in all_model_outs if out is not None)
time_stamps = target.to_pd()[t0:tf].index
return [None if out is None else out.align(reference=time_stamps) for out in all_model_outs] | Aligns the outputs of each model to the time series ``target``. |
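A brief usage sketch for `_align_outputs`, assuming Merlion's `TimeSeries` API as imported in the row above; the toy series and their time stamps are illustrative:
import pandas as pd
from merlion.utils import TimeSeries

# Two model outputs on shifted hourly grids, plus a target series covering both.
idx_a = pd.date_range("2022-01-01 00:00", periods=5, freq="H")
idx_b = pd.date_range("2022-01-01 02:00", periods=5, freq="H")
out_a = TimeSeries.from_pd(pd.Series(range(5), index=idx_a, name="score_a"))
out_b = TimeSeries.from_pd(pd.Series(range(5), index=idx_b, name="score_b"))
target = TimeSeries.from_pd(
    pd.Series(range(8), index=pd.date_range("2022-01-01 00:00", periods=8, freq="H"), name="y")
)

aligned = _align_outputs([out_a, out_b, None], target)
# Non-None outputs are re-indexed onto the target's time stamps between the earliest and
# latest output times; the None entry is passed through unchanged.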
287 | import copy
import inspect
import logging
from typing import Any, Dict, List, Union
import pandas as pd
from merlion.models.base import Config, ModelBase
from merlion.models.factory import ModelFactory
from merlion.models.anomaly.base import DetectorBase, DetectorConfig
from merlion.models.forecast.base import ForecasterBase, ForecasterConfig, ForecasterExogBase, ForecasterExogConfig
from merlion.models.anomaly.forecast_based.base import ForecastingDetectorBase
from merlion.transform.base import Identity
from merlion.transform.resample import TemporalResample
from merlion.transform.sequence import TransformSequence
from merlion.utils import TimeSeries
from merlion.utils.misc import AutodocABCMeta, call_with_accepted_kwargs
_DETECTOR_MEMBERS = dict(inspect.getmembers(DetectorConfig)).keys()
class DetectorBase(ModelBase):
def __init__(self, config: DetectorConfig):
def _default_post_rule_train_config(self):
def threshold(self):
def threshold(self, threshold):
def calibrator(self):
def post_rule(self):
def train(
self, train_data: TimeSeries, train_config=None, anomaly_labels: TimeSeries = None, post_rule_train_config=None
) -> TimeSeries:
def train_post_process(
self, train_result: Union[TimeSeries, pd.DataFrame], anomaly_labels=None, post_rule_train_config=None
) -> TimeSeries:
def _train(self, train_data: pd.DataFrame, train_config=None) -> pd.DataFrame:
def _get_anomaly_score(self, time_series: pd.DataFrame, time_series_prev: pd.DataFrame = None) -> pd.DataFrame:
def get_anomaly_score(self, time_series: TimeSeries, time_series_prev: TimeSeries = None) -> TimeSeries:
def get_anomaly_label(self, time_series: TimeSeries, time_series_prev: TimeSeries = None) -> TimeSeries:
def get_figure(
self,
time_series: TimeSeries,
time_series_prev: TimeSeries = None,
*,
filter_scores=True,
plot_time_series_prev=False,
fig: Figure = None,
**kwargs,
) -> Figure:
def plot_anomaly(
self,
time_series: TimeSeries,
time_series_prev: TimeSeries = None,
*,
filter_scores=True,
plot_time_series_prev=False,
figsize=(1000, 600),
ax=None,
):
def plot_anomaly_plotly(
self,
time_series: TimeSeries,
time_series_prev: TimeSeries = None,
*,
filter_scores=True,
plot_time_series_prev=False,
figsize=None,
):
def _is_detector_attr(base_model, attr):
return isinstance(base_model, DetectorBase) and attr in _DETECTOR_MEMBERS | null |
288 | import copy
import inspect
import logging
from typing import Any, Dict, List, Union
import pandas as pd
from merlion.models.base import Config, ModelBase
from merlion.models.factory import ModelFactory
from merlion.models.anomaly.base import DetectorBase, DetectorConfig
from merlion.models.forecast.base import ForecasterBase, ForecasterConfig, ForecasterExogBase, ForecasterExogConfig
from merlion.models.anomaly.forecast_based.base import ForecastingDetectorBase
from merlion.transform.base import Identity
from merlion.transform.resample import TemporalResample
from merlion.transform.sequence import TransformSequence
from merlion.utils import TimeSeries
from merlion.utils.misc import AutodocABCMeta, call_with_accepted_kwargs
_FORECASTER_MEMBERS = dict(inspect.getmembers(ForecasterConfig)).keys()
_FORECASTER_EXOG_MEMBERS = dict(inspect.getmembers(ForecasterExogConfig)).keys()
class ForecasterBase(ModelBase):
def __init__(self, config: ForecasterConfig):
def max_forecast_steps(self):
def target_seq_index(self) -> int:
def invert_transform(self):
def require_univariate(self) -> bool:
def support_multivariate_output(self) -> bool:
def resample_time_stamps(self, time_stamps: Union[int, List[int]], time_series_prev: TimeSeries = None):
def train_pre_process(
self, train_data: TimeSeries, exog_data: TimeSeries = None, return_exog=None
) -> Union[TimeSeries, Tuple[TimeSeries, Union[TimeSeries, None]]]:
def train(
self, train_data: TimeSeries, train_config=None, exog_data: TimeSeries = None
) -> Tuple[TimeSeries, Optional[TimeSeries]]:
def train_post_process(
self, train_result: Tuple[Union[TimeSeries, pd.DataFrame], Optional[Union[TimeSeries, pd.DataFrame]]]
) -> Tuple[TimeSeries, TimeSeries]:
def transform_exog_data(
self,
exog_data: TimeSeries,
time_stamps: Union[List[int], pd.DatetimeIndex],
time_series_prev: TimeSeries = None,
) -> Union[Tuple[TimeSeries, TimeSeries], Tuple[TimeSeries, None], Tuple[None, None]]:
def _train(self, train_data: pd.DataFrame, train_config=None) -> Tuple[pd.DataFrame, Optional[pd.DataFrame]]:
def _train_with_exog(
self, train_data: pd.DataFrame, train_config=None, exog_data: pd.DataFrame = None
) -> Tuple[pd.DataFrame, Optional[pd.DataFrame]]:
def forecast(
self,
time_stamps: Union[int, List[int]],
time_series_prev: TimeSeries = None,
exog_data: TimeSeries = None,
return_iqr: bool = False,
return_prev: bool = False,
) -> Union[Tuple[TimeSeries, Optional[TimeSeries]], Tuple[TimeSeries, TimeSeries, TimeSeries]]:
def _process_forecast(self, forecast, err, time_series_prev=None, return_prev=False, return_iqr=False):
def _forecast(
self, time_stamps: List[int], time_series_prev: pd.DataFrame = None, return_prev=False
) -> Tuple[pd.DataFrame, Optional[pd.DataFrame]]:
def _forecast_with_exog(
self,
time_stamps: List[int],
time_series_prev: pd.DataFrame = None,
return_prev=False,
exog_data: pd.DataFrame = None,
exog_data_prev: pd.DataFrame = None,
) -> Tuple[pd.DataFrame, Optional[pd.DataFrame]]:
def batch_forecast(
self,
time_stamps_list: List[List[int]],
time_series_prev_list: List[TimeSeries],
return_iqr: bool = False,
return_prev: bool = False,
) -> Tuple[
Union[
Tuple[List[TimeSeries], List[Optional[TimeSeries]]],
Tuple[List[TimeSeries], List[TimeSeries], List[TimeSeries]],
]
]:
def get_figure(
self,
*,
time_series: TimeSeries = None,
time_stamps: List[int] = None,
time_series_prev: TimeSeries = None,
exog_data: TimeSeries = None,
plot_forecast_uncertainty=False,
plot_time_series_prev=False,
) -> Figure:
def plot_forecast(
self,
*,
time_series: TimeSeries = None,
time_stamps: List[int] = None,
time_series_prev: TimeSeries = None,
exog_data: TimeSeries = None,
plot_forecast_uncertainty=False,
plot_time_series_prev=False,
figsize=(1000, 600),
ax=None,
):
def plot_forecast_plotly(
self,
*,
time_series: TimeSeries = None,
time_stamps: List[int] = None,
time_series_prev: TimeSeries = None,
exog_data: TimeSeries = None,
plot_forecast_uncertainty=False,
plot_time_series_prev=False,
figsize=(1000, 600),
):
class ForecasterExogBase(ForecasterBase):
def supports_exog(self):
def exog_transform(self):
def exog_aggregation_policy(self):
def exog_missing_value_policy(self):
def transform_exog_data(
self,
exog_data: TimeSeries,
time_stamps: Union[List[int], pd.DatetimeIndex],
time_series_prev: TimeSeries = None,
) -> Union[Tuple[TimeSeries, TimeSeries], Tuple[TimeSeries, None], Tuple[None, None]]:
def _train_with_exog(
self, train_data: pd.DataFrame, train_config=None, exog_data: pd.DataFrame = None
) -> Tuple[pd.DataFrame, Optional[pd.DataFrame]]:
def _train(self, train_data: pd.DataFrame, train_config=None) -> Tuple[pd.DataFrame, Optional[pd.DataFrame]]:
def _forecast_with_exog(
self,
time_stamps: List[int],
time_series_prev: pd.DataFrame = None,
return_prev=False,
exog_data: pd.DataFrame = None,
exog_data_prev: pd.DataFrame = None,
) -> Tuple[pd.DataFrame, Optional[pd.DataFrame]]:
def _forecast(
self, time_stamps: List[int], time_series_prev: pd.DataFrame = None, return_prev=False
) -> Tuple[pd.DataFrame, Optional[pd.DataFrame]]:
def _is_forecaster_attr(base_model, attr):
is_member = isinstance(base_model, ForecasterBase) and attr in _FORECASTER_MEMBERS
return is_member or (isinstance(base_model, ForecasterExogBase) and attr in _FORECASTER_EXOG_MEMBERS) | null |
289 | from typing import Sequence
import numpy as np
import pandas as pd
try:
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
except ImportError as e:
err = (
"Try installing Merlion with optional dependencies using `pip install salesforce-merlion[deep-learning]` or "
"`pip install salesforce-merlion[all]`"
)
raise ImportError(str(e) + ". " + err)
from merlion.models.base import NormalizingConfig
from merlion.models.anomaly.base import DetectorBase, DetectorConfig
from merlion.post_process.threshold import AggregateAlarms
from merlion.utils.misc import ProgressBar, initializer
from merlion.models.utils.rolling_window_dataset import RollingWindowDataset
The provided code snippet includes necessary dependencies for implementing the `build_hidden_layers` function. Write a Python function `def build_hidden_layers(input_size, hidden_sizes, dropout_rate, activation)` to solve the following problem:
:meta private:
Here is the function:
def build_hidden_layers(input_size, hidden_sizes, dropout_rate, activation):
"""
:meta private:
"""
hidden_layers = []
for i in range(len(hidden_sizes)):
s = input_size if i == 0 else hidden_sizes[i - 1]
hidden_layers.append(nn.Linear(s, hidden_sizes[i]))
hidden_layers.append(activation())
hidden_layers.append(nn.Dropout(dropout_rate))
return torch.nn.Sequential(*hidden_layers) | :meta private: |
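A short usage sketch for `build_hidden_layers`; the sizes, dropout rate, and activation below are illustrative:
import torch
import torch.nn as nn

# Two hidden layers (64 then 32 units), each followed by ReLU and dropout.
hidden = build_hidden_layers(input_size=16, hidden_sizes=[64, 32], dropout_rate=0.1, activation=nn.ReLU)
x = torch.randn(8, 16)   # batch of 8 feature vectors
h = hidden(x)            # shape (8, 32): the last hidden size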
290 | import logging
import math
import numpy as np
import pandas as pd
import torch.nn as nn
from merlion.models.anomaly.base import DetectorConfig, DetectorBase
from merlion.post_process.threshold import AdaptiveAggregateAlarms
from merlion.transform.moving_average import DifferenceTransform
from merlion.utils import UnivariateTimeSeries, TimeSeries, to_timestamp
The provided code snippet includes necessary dependencies for implementing the `param_init` function. Write a Python function `def param_init(module, init="ortho")` to solve the following problem:
MLP parameter initialization function :meta private:
Here is the function:
def param_init(module, init="ortho"):
"""
MLP parameter initialization function
:meta private:
"""
for m in module.modules():
if isinstance(m, nn.Conv2d): # or isinstance(m, nn.Linear):
# print('Update init of ', m)
if init == "he":
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2.0 / n))
elif init == "ortho":
nn.init.orthogonal_(m.weight)
if isinstance(m, nn.Linear):
nn.init.normal_(m.weight, mean=0, std=0.0001)
# n = m.weight.size(1)
# m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, nn.BatchNorm2d):
# print('Update init of ', m)
m.weight.data.fill_(1)
m.bias.data.zero_() | MLP parameter initialization function :meta private: |
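A minimal sketch of applying `param_init` to a toy module, assuming `torch.nn` is available as `nn`; the architecture is illustrative:
import torch.nn as nn

module = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),  # weight gets orthogonal init under init="ortho"
    nn.BatchNorm2d(8),               # weight filled with 1, bias zeroed
    nn.Flatten(),
    nn.Linear(128, 10),              # weight re-initialized as N(0, 0.0001)
)
param_init(module, init="ortho")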
291 | import logging
import math
import numpy as np
import pandas as pd
try:
import torch
import torch.nn as nn
import torch.utils.data as data
except ImportError as e:
err = (
"Try installing Merlion with optional dependencies using `pip install salesforce-merlion[deep-learning]` or "
"`pip install salesforce-merlion[all]`"
)
raise ImportError(str(e) + ". " + err)
from merlion.models.anomaly.base import DetectorConfig, DetectorBase
from merlion.post_process.threshold import AdaptiveAggregateAlarms
from merlion.transform.moving_average import DifferenceTransform
from merlion.utils import UnivariateTimeSeries, TimeSeries, to_timestamp
class MLPNet(nn.Module):
"""
MLP network architecture
:meta private:
"""
def __init__(self, dim_inp=None, dim_out=None, nhiddens=(400, 400, 400), bn=True):
super().__init__()
self.dim_inp = dim_inp
self.layers = nn.ModuleList([])
for i in range(len(nhiddens)):
if i == 0:
layer = nn.Linear(dim_inp, nhiddens[i], bias=False)
else:
layer = nn.Linear(nhiddens[i - 1], nhiddens[i], bias=False)
self.layers.append(layer)
bn_layer = nn.BatchNorm1d(nhiddens[i]) if bn else nn.Sequential()
relu_layer = nn.ReLU(inplace=True)
self.layers.extend([bn_layer, relu_layer])
fc = nn.Linear(nhiddens[-1], dim_out, bias=True)
self.layers.append(fc)
self.net = nn.Sequential(*self.layers)
self.nhiddens = nhiddens
param_init(self)
def forward(self, x, logit=False):
x = x.view(-1, self.dim_inp)
x = self.net(x)
# out = torch.nn.Softmax(dim=1)(x)
# if logit:
# return x
return x
The provided code snippet includes necessary dependencies for implementing the `get_dnn_loss_as_anomaly_score` function. Write a Python function `def get_dnn_loss_as_anomaly_score(tensor_x, tensor_y, use_cuda=True)` to solve the following problem:
train an MLP using the Adam optimizer for 20 iterations on the training data provided :meta private:
Here is the function:
def get_dnn_loss_as_anomaly_score(tensor_x, tensor_y, use_cuda=True):
"""
train an MLP using the Adam optimizer for 20 iterations on the training data provided
:meta private:
"""
BS = tensor_x.size(0)
LR = 0.001
max_epochs = 20
model = MLPNet(dim_inp=1, dim_out=tensor_y.size(1), nhiddens=[400, 400, 400], bn=True)
if use_cuda:
model = model.cuda()
epoch = 0
# optimizer = torch.optim.SGD(model.parameters(), lr=LR, momentum=0.9, weight_decay=0)
optimizer = torch.optim.Adam(model.parameters(), lr=LR, weight_decay=1e-3)
my_dataset = data.TensorDataset(tensor_x, tensor_y) # create your datset
my_dataloader = data.DataLoader(my_dataset, batch_size=BS, shuffle=False, num_workers=0, pin_memory=True)
# with tqdm.tqdm(total=max_epochs) as pbar:
while epoch < max_epochs:
# pbar.update(1)
epoch += 1
for x, y in my_dataloader:
if use_cuda:
x, y = x.cuda(), y.cuda()
y_ = model(x)
loss = ((y - y_) ** 2).sum(1).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
for x, y in my_dataloader:
if use_cuda:
x, y = x.cuda(), y.cuda()
y_ = model(x)
loss = ((y - y_) ** 2).sum(1).view(-1)
return loss.data.view(-1).cpu().numpy() | train an MLP using the Adam optimizer for 20 iterations on the training data provided :meta private:
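A small CPU-only usage sketch for `get_dnn_loss_as_anomaly_score`, assuming `param_init` from the previous row is also in scope (since `MLPNet.__init__` calls it); the tensor shapes are illustrative:
import torch

n, dim_out = 100, 8
tensor_x = torch.rand(n, 1)          # one scalar input feature per point (dim_inp=1 in MLPNet)
tensor_y = torch.randn(n, dim_out)   # targets whose reconstruction error becomes the anomaly score
scores = get_dnn_loss_as_anomaly_score(tensor_x, tensor_y, use_cuda=False)
# scores is a length-n NumPy array; larger values indicate more anomalous points.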
292 | import logging
import math
import numpy as np
import pandas as pd
from merlion.models.anomaly.base import DetectorConfig, DetectorBase
from merlion.post_process.threshold import AdaptiveAggregateAlarms
from merlion.transform.moving_average import DifferenceTransform
from merlion.utils import UnivariateTimeSeries, TimeSeries, to_timestamp
The provided code snippet includes necessary dependencies for implementing the `normalize_data` function. Write a Python function `def normalize_data(x)` to solve the following problem:
normalize data to have 0 mean and unit variance :meta private:
Here is the function:
def normalize_data(x):
"""
normalize data to have 0 mean and unit variance
:meta private:
"""
mn = np.mean(x, axis=0, keepdims=True)
sd = np.std(x, axis=0, keepdims=True)
return (x - mn) / (sd + 1e-7) | normalize data to have 0 mean and unit variance :meta private: |
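A quick sketch of `normalize_data` on a toy array; the values are illustrative:
import numpy as np

x = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
z = normalize_data(x)
print(z.mean(axis=0), z.std(axis=0))  # each column now has (approximately) zero mean and unit variance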
293 | import functools
import logging
import time
import warnings
import numpy as np
from numpy.linalg import LinAlgError
import statsmodels.api as sm
logger = logging.getLogger(__name__)
def _model_name(model_spec):
"""
Return model name
"""
p, d, q = model_spec.order
P, D, Q, m = model_spec.seasonal_order
return " SARIMA({p},{d},{q})({P},{D},{Q})[{m}] {constant_trend}".format(
p=p,
d=d,
q=q,
P=P,
D=D,
Q=Q,
m=m,
constant_trend=" with constant" if model_spec.trend is not None else "without constant",
)
def _root_test(model_fit, ic):
"""
Check the roots of the sarima model, and set IC to inf if the roots are
near non-invertible.
"""
# This is identical to the implementation of pmdarima and forecast
max_invroot = 0
p, d, q = model_fit.model.order
P, D, Q, m = model_fit.model.seasonal_order
if p + P > 0:
max_invroot = max(0, *np.abs(1 / model_fit.arroots))
if q + Q > 0 and np.isfinite(ic):
max_invroot = max(0, *np.abs(1 / model_fit.maroots))
if max_invroot > 1 - 1e-2:
ic = np.inf
logger.debug(
"Near non-invertible roots for order "
"(%i, %i, %i)(%i, %i, %i, %i); setting score to inf (at "
"least one inverse root too close to the border of the "
"unit circle: %.3f)" % (p, d, q, P, D, Q, m, max_invroot)
)
return ic
The provided code snippet includes necessary dependencies for implementing the `_fit_sarima_model` function. Write a Python function `def _fit_sarima_model(y, order, seasonal_order, trend, method, maxiter, information_criterion, exog=None, **kwargs)` to solve the following problem:
Train a SARIMA model with the given time series and hyperparameter tuple. Return the trained model, training time and information criterion
Here is the function:
def _fit_sarima_model(y, order, seasonal_order, trend, method, maxiter, information_criterion, exog=None, **kwargs):
"""
    Train a SARIMA model with the given time series and hyperparameter tuple.
    Return the trained model, training time and information criterion
"""
start = time.time()
ic = np.inf
model_fit = None
with warnings.catch_warnings():
warnings.simplefilter("ignore")
model_spec = sm.tsa.SARIMAX(
endog=y,
exog=exog,
order=order,
seasonal_order=seasonal_order,
trend=trend,
validate_specification=False,
enforce_stationarity=False,
enforce_invertibility=False,
**kwargs,
)
try:
model_fit = model_spec.fit(method=method, maxiter=maxiter, disp=0)
except (LinAlgError, ValueError) as v:
logger.warning(f"Caught exception {type(v).__name__}: {str(v)}")
else:
ic = model_fit.info_criteria(information_criterion)
ic = _root_test(model_fit, ic)
fit_time = time.time() - start
logger.debug(
"{model} : {ic_name}={ic:.3f}, Time={time:.2f} sec".format(
model=_model_name(model_spec), ic_name=information_criterion.upper(), ic=ic, time=fit_time
)
)
    return model_fit, fit_time, ic | Train a SARIMA model with the given time series and hyperparameter tuple. Return the trained model, training time and information criterion
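A hedged usage sketch for the helper above: fit a non-seasonal ARIMA(1,1,1) with a constant to a synthetic random walk. The order, optimizer and iteration budget below are illustrative choices, not the ones an AutoSarima search would necessarily pick.

import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))  # synthetic random walk

model_fit, fit_time, ic = _fit_sarima_model(
    y=y,
    order=(1, 1, 1),
    seasonal_order=(0, 0, 0, 0),  # no seasonal component
    trend="c",
    method="lbfgs",
    maxiter=20,
    information_criterion="aic",
)
print(f"fit time: {fit_time:.2f}s, AIC: {ic:.2f}")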
294 | import functools
import logging
import time
import warnings
import numpy as np
from numpy.linalg import LinAlgError
import statsmodels.api as sm
logger = logging.getLogger(__name__)
def _model_name(model_spec):
"""
Return model name
"""
p, d, q = model_spec.order
P, D, Q, m = model_spec.seasonal_order
return " SARIMA({p},{d},{q})({P},{D},{Q})[{m}] {constant_trend}".format(
p=p,
d=d,
q=q,
P=P,
D=D,
Q=Q,
m=m,
constant_trend=" with constant" if model_spec.trend is not None else "without constant",
)
def _root_test(model_fit, ic):
"""
Check the roots of the sarima model, and set IC to inf if the roots are
near non-invertible.
"""
# This is identical to the implementation of pmdarima and forecast
max_invroot = 0
p, d, q = model_fit.model.order
P, D, Q, m = model_fit.model.seasonal_order
if p + P > 0:
max_invroot = max(0, *np.abs(1 / model_fit.arroots))
if q + Q > 0 and np.isfinite(ic):
max_invroot = max(0, *np.abs(1 / model_fit.maroots))
if max_invroot > 1 - 1e-2:
ic = np.inf
logger.debug(
"Near non-invertible roots for order "
"(%i, %i, %i)(%i, %i, %i, %i); setting score to inf (at "
"least one inverse root too close to the border of the "
"unit circle: %.3f)" % (p, d, q, P, D, Q, m, max_invroot)
)
return ic
The provided code snippet includes necessary dependencies for implementing the `_refit_sarima_model` function. Write a Python function `def _refit_sarima_model(model_fitted, approx_ic, method, inititer, maxiter, information_criterion)` to solve the following problem:
Re-train the approximated SARIMA model which is used in approximation mode. Take the approximated model as initialization and fine-tune it for up to (maxiter - inititer) additional single-iteration rounds, stopping early once the information criterion no longer improves. Return the trained model
Here is the function:
def _refit_sarima_model(model_fitted, approx_ic, method, inititer, maxiter, information_criterion):
"""
    Re-train the approximated SARIMA model which is used in approximation mode.
    Take the approximated model as initialization and fine-tune it for up to
    (maxiter - inititer) additional single-iteration rounds, stopping early once
    the information criterion no longer improves.
    Return the trained model
"""
start = time.time()
with warnings.catch_warnings():
warnings.simplefilter("ignore")
best_fit = model_fitted
ic = approx_ic
logger.debug(
"Initial Model: {model} Iter={iter:d}, {ic_name}={ic:.3f}".format(
model=_model_name(model_fitted.model), iter=inititer, ic_name=information_criterion.upper(), ic=ic
)
)
for cur_iter in range(inititer + 1, maxiter + 1):
try:
model_fitted = model_fitted.model.fit(
method=method, maxiter=1, disp=0, start_params=model_fitted.params
)
except (LinAlgError, ValueError) as v:
logger.warning(f"Caught exception {type(v).__name__}: {str(v)}")
else:
cur_ic = model_fitted.info_criteria(information_criterion)
cur_ic = _root_test(model_fitted, cur_ic)
if cur_ic > ic or np.isinf(cur_ic):
break
else:
ic = cur_ic
best_fit = model_fitted
fit_time = time.time() - start
logger.debug(
"{model} : Iter={iter:d}, {ic_name}={ic:.3f}, Time={time:.2f} sec".format(
model=_model_name(model_fitted.model),
iter=cur_iter,
ic_name=information_criterion.upper(),
ic=ic,
time=fit_time,
)
)
    return best_fit | Re-train the approximated SARIMA model which is used in approximation mode. Take the approximated model as initialization and fine-tune it for up to (maxiter - inititer) additional single-iteration rounds, stopping early once the information criterion no longer improves. Return the trained model
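A hedged usage sketch of the warm-start refinement above, assuming statsmodels is installed; the order and iteration counts are illustrative. The "approximate" fit is produced by running the optimizer for only a few iterations before handing the result to _refit_sarima_model.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))  # synthetic random walk

# Cheap approximate fit: only a few optimizer iterations.
spec = sm.tsa.SARIMAX(
    y, order=(1, 1, 1), trend="c",
    enforce_stationarity=False, enforce_invertibility=False,
)
approx_fit = spec.fit(method="lbfgs", maxiter=5, disp=0)
approx_ic = approx_fit.info_criteria("aic")

# Refine one optimizer iteration at a time, stopping once the AIC stops improving.
best_fit = _refit_sarima_model(
    approx_fit, approx_ic, method="lbfgs",
    inititer=5, maxiter=15, information_criterion="aic",
)
print(best_fit.info_criteria("aic"))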
295 | import functools
import logging
import time
import warnings
import numpy as np
from numpy.linalg import LinAlgError
import statsmodels.api as sm
logger = logging.getLogger(__name__)
def _model_name(model_spec):
"""
Return model name
"""
p, d, q = model_spec.order
P, D, Q, m = model_spec.seasonal_order
return " SARIMA({p},{d},{q})({P},{D},{Q})[{m}] {constant_trend}".format(
p=p,
d=d,
q=q,
P=P,
D=D,
Q=Q,
m=m,
constant_trend=" with constant" if model_spec.trend is not None else "without constant",
)
def _root_test(model_fit, ic):
"""
Check the roots of the sarima model, and set IC to inf if the roots are
near non-invertible.
"""
# This is identical to the implementation of pmdarima and forecast
max_invroot = 0
p, d, q = model_fit.model.order
P, D, Q, m = model_fit.model.seasonal_order
if p + P > 0:
max_invroot = max(0, *np.abs(1 / model_fit.arroots))
if q + Q > 0 and np.isfinite(ic):
max_invroot = max(0, *np.abs(1 / model_fit.maroots))
if max_invroot > 1 - 1e-2:
ic = np.inf
logger.debug(
"Near non-invertible roots for order "
"(%i, %i, %i)(%i, %i, %i, %i); setting score to inf (at "
"least one inverse root too close to the border of the "
"unit circle: %.3f)" % (p, d, q, P, D, Q, m, max_invroot)
)
return ic
The provided code snippet includes necessary dependencies for implementing the `detect_maxiter_sarima_model` function. Write a Python function `def detect_maxiter_sarima_model(y, d, D, m, method, information_criterion, exog=None, **kwargs)` to solve the following problem:
run a zero model with SARIMA(2, d, 2)(1, D, 1) / ARIMA(2, d, 2) to determine the optimal maxiter
Here is the function:
def detect_maxiter_sarima_model(y, d, D, m, method, information_criterion, exog=None, **kwargs):
"""
    run a zero model with SARIMA(2, d, 2)(1, D, 1) / ARIMA(2, d, 2) to determine the optimal maxiter
"""
logger.debug("Automatically detect the maxiter")
order = (2, d, 2)
if m == 1:
seasonal_order = (0, 0, 0, 0)
else:
seasonal_order = (1, D, 1, m)
# default setting of maxiter is 10
start = time.time()
fit_time = np.nan
maxiter = 10
ic = np.inf
model_spec = sm.tsa.SARIMAX(
endog=y,
exog=exog,
order=order,
seasonal_order=seasonal_order,
trend="c",
validate_specification=False,
enforce_stationarity=False,
enforce_invertibility=False,
)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
try:
model_fit = model_spec.fit(method=method, maxiter=maxiter, disp=0)
except (LinAlgError, ValueError) as v:
logger.warning(f"Caught exception {type(v).__name__}: {str(v)}")
return maxiter
else:
ic = model_fit.info_criteria(information_criterion)
ic = _root_test(model_fit, ic)
for cur_iter in range(maxiter + 1, 51):
try:
model_fit = model_fit.model.fit(method=method, maxiter=1, disp=0, start_params=model_fit.params)
except (LinAlgError, ValueError) as v:
logger.warning(f"Caught exception {type(v).__name__}: {str(v)}")
else:
cur_ic = model_fit.info_criteria(information_criterion)
cur_ic = _root_test(model_fit, cur_ic)
if cur_ic > ic or np.isinf(cur_ic):
break
else:
ic = cur_ic
maxiter = cur_iter
logger.debug(
"Zero model: {model} Iter={iter:d}, {ic_name}={ic:.3f}".format(
model=_model_name(model_fit.model),
iter=maxiter,
ic_name=information_criterion.upper(),
ic=ic,
)
)
fit_time = time.time() - start
logger.debug(
"Zero model: {model} Iter={iter:d}, {ic_name}={ic:.3f}, Time={time:.2f} sec".format(
model=_model_name(model_fit.model),
iter=maxiter,
ic_name=information_criterion.upper(),
ic=ic,
time=fit_time,
)
)
logger.info(f"Automatically detect the maxiter is {maxiter}")
    return maxiter | run a zero model with SARIMA(2, d, 2)(1, D, 1) / ARIMA(2, d, 2) to determine the optimal maxiter
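A hedged usage sketch for the zero-model heuristic above (synthetic data, illustrative settings); by construction the returned iteration budget lies between 10 and 50.

import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300))  # synthetic non-seasonal series

# d=1 means one regular difference; D=0 and m=1 mean no seasonal component.
maxiter = detect_maxiter_sarima_model(
    y=y, d=1, D=0, m=1, method="lbfgs", information_criterion="aic"
)
print(maxiter)  # an integer in [10, 50]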
296 | import functools
import logging
import time
import warnings
import numpy as np
from numpy.linalg import LinAlgError
import statsmodels.api as sm
def diff(x, lag=1, differences=1):
"""
Return suitably lagged and iterated differences from the given 1D or 2D array x
"""
n = x.shape[0]
if any(v < 1 for v in (lag, differences)):
raise ValueError("lag and differences must be positive (> 0) integers")
if lag >= n:
raise ValueError("lag should be smaller than the length of array")
if differences >= n:
raise ValueError("differences should be smaller than the length of array")
res = x
for i in range(differences):
if res.ndim == 1: # compute the lag for vector
res = res[lag : res.shape[0]] - res[: res.shape[0] - lag]
else:
res = res[lag : res.shape[0], :] - res[: res.shape[0] - lag, :]
return res
def seas_seasonalstationaritytest(x, m):
"""
Estimate the strength of seasonal component. The idea can be found in
https://otexts.com/fpp2/seasonal-strength.html
R implementation uses mstl instead of stl to deal with multiple seasonality
"""
stlfit = sm.tsa.STL(x, m).fit()
vare = np.nanvar(stlfit.resid)
season = max(0, min(1, 1 - vare / np.nanvar(stlfit.resid + stlfit.seasonal)))
return season > 0.64
The provided code snippet includes necessary dependencies for implementing the `nsdiffs` function. Write a Python function `def nsdiffs(x, m, max_D=1, test="seas")` to solve the following problem:
Estimate the seasonal differencing order D with a statistical test Parameters: x : the time series to difference m : the number of seasonal periods max_D : the maximal seasonal differencing order allowed test : the type of seasonality test used to detect seasonal periodicity
Here is the function:
def nsdiffs(x, m, max_D=1, test="seas"):
"""
    Estimate the seasonal differencing order D with a statistical test
    Parameters:
    x : the time series to difference
    m : the number of seasonal periods
    max_D : the maximal seasonal differencing order allowed
    test : the type of seasonality test used to detect seasonal periodicity
"""
D = 0
if max_D <= 0:
raise ValueError("max_D must be a positive integer")
if np.max(x) == np.min(x) or m < 2:
return D
if test == "seas":
dodiff = seas_seasonalstationaritytest(x, m)
while dodiff and D < max_D:
D += 1
x = diff(x, lag=m)
if np.max(x) == np.min(x):
return D
if len(x) >= 2 * m and D < max_D:
dodiff = seas_seasonalstationaritytest(x, m)
else:
dodiff = False
    return D | Estimate the seasonal differencing order D with a statistical test Parameters: x : the time series to difference m : the number of seasonal periods max_D : the maximal seasonal differencing order allowed test : the type of seasonality test used to detect seasonal periodicity
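A minimal usage sketch using a synthetic monthly series with a strong yearly cycle; for such a series the seasonal-strength test usually requests one round of seasonal differencing.

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(240)  # 20 years of monthly observations
x = 10.0 * np.sin(2.0 * np.pi * t / 12.0) + rng.normal(scale=0.5, size=t.size)

print(nsdiffs(x, m=12))  # typically 1 for a strongly seasonal series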
297 | import functools
import logging
import time
import warnings
import numpy as np
from numpy.linalg import LinAlgError
import statsmodels.api as sm
def diff(x, lag=1, differences=1):
"""
Return suitably lagged and iterated differences from the given 1D or 2D array x
"""
n = x.shape[0]
if any(v < 1 for v in (lag, differences)):
raise ValueError("lag and differences must be positive (> 0) integers")
if lag >= n:
raise ValueError("lag should be smaller than the length of array")
if differences >= n:
raise ValueError("differences should be smaller than the length of array")
res = x
for i in range(differences):
if res.ndim == 1: # compute the lag for vector
res = res[lag : res.shape[0]] - res[: res.shape[0] - lag]
else:
res = res[lag : res.shape[0], :] - res[: res.shape[0] - lag, :]
return res
def KPSS_stationaritytest(xx, alpha=0.05):
"""
The KPSS test is used with the null hypothesis that
x has a stationary root against a unit-root alternative.
Then the test returns the least number of differences required to
pass the test at the level alpha
"""
with warnings.catch_warnings():
warnings.simplefilter("ignore")
results = sm.tsa.stattools.kpss(xx, regression="c", nlags=round(3 * np.sqrt(len(xx)) / 13))
yout = results[1]
return yout, yout < alpha
The provided code snippet includes necessary dependencies for implementing the `ndiffs` function. Write a Python function `def ndiffs(x, alpha=0.05, max_d=2, test="kpss")` to solve the following problem:
Estimate the differencing order d with a statistical test Parameters: x : the time series to difference alpha : level of the test, possible values range from 0.01 to 0.1 max_d : the maximal differencing order allowed test : the type of stationarity test to use (currently only "kpss" is supported)
Here is the function:
def ndiffs(x, alpha=0.05, max_d=2, test="kpss"):
"""
    Estimate the differencing order d with a statistical test
    Parameters:
    x : the time series to difference
    alpha : level of the test, possible values range from 0.01 to 0.1
    max_d : the maximal differencing order allowed
    test : the type of stationarity test to use (currently only "kpss" is supported)
"""
d = 0
if max_d <= 0:
raise ValueError("max_d must be a positive integer")
if alpha < 0.01:
warnings.warn("Specified alpha value is less than the minimum, setting alpha=0.01")
alpha = 0.01
if alpha > 0.1:
warnings.warn("Specified alpha value is larger than the maximum, setting alpha=0.1")
alpha = 0.1
if np.max(x) == np.min(x):
return d
if test == "kpss":
pval, dodiff = KPSS_stationaritytest(x, alpha)
if np.isnan(pval):
return 0
while dodiff and d < max_d:
d += 1
x = diff(x)
if np.max(x) == np.min(x):
return d
pval, dodiff = KPSS_stationaritytest(x, alpha)
if np.isnan(pval):
return d - 1
    return d | Estimate the differencing order d with a statistical test Parameters: x : the time series to difference alpha : level of the test, possible values range from 0.01 to 0.1 max_d : the maximal differencing order allowed test : the type of stationarity test to use (currently only "kpss" is supported)
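A minimal usage sketch: a random walk is non-stationary, so the KPSS-based procedure usually requests a single difference.

import numpy as np

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=500))  # random walk

print(ndiffs(x, alpha=0.05, max_d=2, test="kpss"))  # typically 1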
298 | from typing import List
import numpy as np
import pandas as pd
from pandas.tseries import offsets
from pandas.tseries.frequencies import to_offset
def time_features_from_frequency_str(freq_str: str) -> List[TimeFeature]:
"""
:param freq_str: Frequency string of the form [multiple][granularity] such as "12H", "5min", "1D" etc.
:return: a list of time features that will be appropriate for the given frequency string.
"""
features_by_offsets = {
offsets.YearEnd: [],
offsets.QuarterEnd: [MonthOfYear],
offsets.MonthEnd: [MonthOfYear],
offsets.Week: [DayOfMonth, WeekOfYear],
offsets.Day: [DayOfWeek, DayOfMonth, DayOfYear],
offsets.BusinessDay: [DayOfWeek, DayOfMonth, DayOfYear],
offsets.Hour: [HourOfDay, DayOfWeek, DayOfMonth, DayOfYear],
offsets.Minute: [
MinuteOfHour,
HourOfDay,
DayOfWeek,
DayOfMonth,
DayOfYear,
],
offsets.Second: [
SecondOfMinute,
MinuteOfHour,
HourOfDay,
DayOfWeek,
DayOfMonth,
DayOfYear,
],
}
offset = to_offset(freq_str)
for offset_type, feature_classes in features_by_offsets.items():
if isinstance(offset, offset_type):
return [cls() for cls in feature_classes]
supported_freq_msg = f"""
Unsupported frequency {freq_str}
The following frequencies are supported:
Y - yearly
alias: A
M - monthly
W - weekly
D - daily
B - business days
H - hourly
T - minutely
alias: min
S - secondly
"""
raise RuntimeError(supported_freq_msg)
The provided code snippet includes necessary dependencies for implementing the `get_time_features` function. Write a Python function `def get_time_features(dates: pd.DatetimeIndex, ts_encoding: str = "h")` to solve the following problem:
Convert pandas Datetime to numerical vectors that can be used for training
Here is the function:
def get_time_features(dates: pd.DatetimeIndex, ts_encoding: str = "h"):
"""
Convert pandas Datetime to numerical vectors that can be used for training
"""
features = np.vstack([feat(dates) for feat in time_features_from_frequency_str(ts_encoding)])
return features.transpose(1, 0) | Convert pandas Datetime to numerical vectors that can be used for training |
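A hedged usage sketch, assuming the time-feature classes referenced above (HourOfDay, DayOfWeek, DayOfMonth, DayOfYear, ...) are defined elsewhere in the module as callables that map a DatetimeIndex to a numeric vector; they are not shown in this snippet.

import pandas as pd

dates = pd.date_range("2023-01-01", periods=48, freq="H")
feats = get_time_features(dates, ts_encoding="h")

# For an hourly frequency the mapping above selects hour-of-day, day-of-week,
# day-of-month and day-of-year, giving one feature row per timestamp.
print(feats.shape)  # (48, 4)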
299 | import os
import math
import numpy as np
try:
import torch
import torch.nn as nn
import torch.fft as fft
import torch.nn.functional as F
from einops import rearrange, reduce, repeat
except ImportError as e:
err = (
"Try installing Merlion with optional dependencies using `pip install salesforce-merlion[deep-learning]` or "
"`pip install `salesforce-merlion[all]`"
)
raise ImportError(str(e) + ". " + err)
from math import sqrt
from scipy.fftpack import next_fast_len
def conv1d_fft(f, g, dim=-1):
    # Lengths of the two inputs along the chosen dimension.
    N = f.size(dim)
    M = g.size(dim)
    # Zero-pad to an FFT-friendly length that can hold the full linear result (N + M - 1 points).
    fast_len = next_fast_len(N + M - 1)
    F_f = fft.rfft(f, fast_len, dim=dim)
    F_g = fft.rfft(g, fast_len, dim=dim)
    # Multiplying by the conjugate spectrum of g yields a correlation rather than a convolution.
    F_fg = F_f * F_g.conj()
    out = fft.irfft(F_fg, fast_len, dim=dim)
    # Roll and slice so the output is aligned with f and has length N along `dim`.
    out = out.roll((-1,), dims=(dim,))
    idx = torch.as_tensor(range(fast_len - N, fast_len)).to(out.device)
    out = out.index_select(dim, idx)
return out | null |
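A minimal usage sketch for conv1d_fft (assumes torch and scipy are installed and the imports above have been run): correlate a batch of signals with shorter filters along the last dimension; the output keeps the length of the first argument.

import torch

f = torch.randn(2, 8, 64)  # batch of signals
g = torch.randn(2, 8, 16)  # shorter filters

out = conv1d_fft(f, g, dim=-1)
print(out.shape)  # torch.Size([2, 8, 64])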