Dataset Preview
The full dataset viewer is not available. Only showing a preview of the rows.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 3 new columns ({'SQL', 'db_id', 'pandas_query'}) and 8 missing columns ({'tensorflow_start_code', 'pytorch_sol_code', 'tensorflow_test_code', 'tensorflow_library', 'pytorch_test_code', 'tensorflow_sol_code', 'pytorch_start_code', 'pytorch_library'}).

This happened while the json dataset builder was generating data using

hf://datasets/xia01ongLi/DS-CodeBridge/data/dq300-00000-of-00001.jsonl (at revision 043881ae521c4aaa5651aa7f559d0ab2435f5033)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1871, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 643, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2293, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2241, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              question_id: int64
              db_id: string
              SQL: string
              pandas_query: string
              to
              {'question_id': Value(dtype='int64', id=None), 'pytorch_library': Value(dtype='string', id=None), 'pytorch_start_code': Value(dtype='string', id=None), 'pytorch_sol_code': Value(dtype='string', id=None), 'pytorch_test_code': {'setup_code': Value(dtype='string', id=None), 'test_cases': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'tensorflow_library': Value(dtype='string', id=None), 'tensorflow_start_code': Value(dtype='string', id=None), 'tensorflow_sol_code': Value(dtype='string', id=None), 'tensorflow_test_code': {'setup_code': Value(dtype='string', id=None), 'test_cases': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1433, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1050, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 925, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1001, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1742, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1873, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 3 new columns ({'SQL', 'db_id', 'pandas_query'}) and 8 missing columns ({'tensorflow_start_code', 'pytorch_sol_code', 'tensorflow_test_code', 'tensorflow_library', 'pytorch_test_code', 'tensorflow_sol_code', 'pytorch_start_code', 'pytorch_library'}).
              
              This happened while the json dataset builder was generating data using
              
              hf://datasets/xia01ongLi/DS-CodeBridge/data/dq300-00000-of-00001.jsonl (at revision 043881ae521c4aaa5651aa7f559d0ab2435f5033)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
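The report above recommends splitting the mismatched files into separate configurations. As a minimal sketch, the dataset's README.md front matter could declare one config per schema; the config names and the first data_files pattern are hypothetical (only data/dq300-00000-of-00001.jsonl is named in the error), so they would need to match the repository's actual file layout:

configs:
- config_name: pytorch_tensorflow   # hypothetical name for the 9-column PyTorch/TensorFlow files
  data_files: "data/ds*.jsonl"      # hypothetical pattern; replace with the real file names
- config_name: sql_pandas           # hypothetical name for the 4-column SQL/pandas schema
  data_files: "data/dq300-*.jsonl"  # the file that triggered the cast error

With the schemas separated this way, the viewer builds each config independently instead of trying to cast the dq300 rows to the PyTorch/TensorFlow schema.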


question_id: int64
pytorch_library: string
pytorch_start_code: string
pytorch_sol_code: string
pytorch_test_code: dict
tensorflow_library: string
tensorflow_start_code: string
tensorflow_sol_code: string
tensorflow_test_code: dict
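Each *_test_code value is a dict holding a setup_code string and a test_cases list of assert statements, so a preview row can be validated by exec-ing its fields in order. The harness below is a minimal sketch of that intended use, not a utility shipped with the dataset (run_pytorch_row and the env dict are illustrative):

def run_pytorch_row(row):
    # Hypothetical checker: execute one row's PyTorch fields in a shared
    # namespace, then run each test case (each case is an assert statement).
    env = {}
    exec(row["pytorch_library"], env)                  # framework imports
    exec(row["pytorch_start_code"], env)               # problem setup
    exec(row["pytorch_sol_code"], env)                 # reference solution
    exec(row["pytorch_test_code"]["setup_code"], env)  # test-only setup
    for case in row["pytorch_test_code"]["test_cases"]:
        exec(case, env)

The tensorflow_* fields follow the same pattern, so the same loop works with the key prefix swapped.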
0
import torch
import numpy as np
np_array = np.array([1, 2, 3])
#result =
result = torch.from_numpy(np_array)
{ "setup_code": "", "test_cases": [ "assert np.allclose(result.numpy(), np_array)" ] }
import tensorflow as tf
import numpy as np
np_array = np.array([1, 2, 3])
#result =
result = tf.convert_to_tensor(np_array)
{ "setup_code": "", "test_cases": [ "assert np.allclose(result.numpy(), np_array)" ] }
1
import torch
#result =
result = torch.tensor(3.43)
{ "setup_code": "import numpy as np", "test_cases": [ "assert np.isclose(result.item(), 3.43)" ] }
import tensorflow as tf
#result =
result = tf.constant(3.43)
{ "setup_code": "import numpy as np", "test_cases": [ "assert np.isclose(result.numpy(), 3.43)" ] }
2
import torch
#result =
result = torch.arange(1, 10).view(3, 3)
{ "setup_code": "import numpy as np", "test_cases": [ "assert np.allclose(result.numpy(), np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))" ] }
import tensorflow as tf
#result =
result = tf.reshape(tf.range(1, 10), (3, 3))
{ "setup_code": "import numpy as np", "test_cases": [ "assert np.array_equal(result, np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))" ] }
3
import torch
#result =
input_tensor = torch.arange(1, 6)
result = torch.diag(input_tensor)
{ "setup_code": "import numpy as np", "test_cases": [ "expected_result = np.diag(np.arange(1, 6))\nassert np.allclose(result.numpy(), expected_result), f'Expected {expected_result}, but got {result.numpy()}'" ] }
import tensorflow as tf
#result =
input_tensor = tf.range(1, 6)
result = tf.linalg.diag(input_tensor)
{ "setup_code": "import numpy as np", "test_cases": [ "expected_result = np.diag(np.arange(1, 6))\nassert np.allclose(result.numpy(), expected_result), f'Expected {expected_result}, but got {result.numpy()}'" ] }
4
import torch
#result =
result = torch.eye(4)
{ "setup_code": "import numpy as np", "test_cases": [ "expected_result = np.eye(4)\nassert np.allclose(result.numpy(), expected_result), f'Expected {expected_result}, but got {result.numpy()}'" ] }
import tensorflow as tf
#result =
result = tf.eye(4)
{ "setup_code": "import numpy as np", "test_cases": [ "expected_result = np.eye(4)\nassert np.allclose(result.numpy(), expected_result), f'Expected {expected_result}, but got {result.numpy()}'" ] }
5
import torch
tensor1 = torch.tensor([1, 2, 3])
tensor2 = torch.tensor([4, 5, 6])
#result =
result = tensor1 + tensor2
{ "setup_code": "import numpy as np", "test_cases": [ "assert np.allclose(result.numpy(), np.array([5, 7, 9]))" ] }
import tensorflow as tf
import numpy as np
tensor1 = tf.constant([1, 2, 3])
tensor2 = tf.constant([4, 5, 6])
#result =
result = tensor1 + tensor2
{ "setup_code": "import numpy as np", "test_cases": [ "assert np.allclose(result.numpy(), np.array([5, 7, 9]))" ] }
6
import torch
tensor1 = torch.tensor([1, 2, 3])
tensor2 = torch.tensor([4, 5, 6])
#result =
result = tensor1 - tensor2
{ "setup_code": "import numpy as np", "test_cases": [ "assert np.allclose(result.numpy(), np.array([-3, -3, -3]))" ] }
import tensorflow as tf
import numpy as np
tensor1 = tf.constant([1, 2, 3])
tensor2 = tf.constant([4, 5, 6])
#result =
result = tensor1 - tensor2
{ "setup_code": "import numpy as np", "test_cases": [ "assert np.allclose(result.numpy(), np.array([-3, -3, -3]))" ] }
7
import torch
tensor1 = torch.tensor([[1, 2], [3, 4]])
tensor2 = torch.tensor([[5, 6], [7, 8]])
#result =
result = torch.mm(tensor1, tensor2)
{ "setup_code": "import numpy as np", "test_cases": [ "assert np.allclose(result.numpy(), np.array([[19, 22], [43, 50]]))" ] }
import tensorflow as tf
import numpy as np
tensor1 = tf.constant([[1, 2], [3, 4]])
tensor2 = tf.constant([[5, 6], [7, 8]])
#result =
result = tf.matmul(tensor1, tensor2)
{ "setup_code": "import numpy as np", "test_cases": [ "assert np.allclose(result.numpy(), np.array([[19, 22], [43, 50]]))" ] }
8
import torch
import numpy as np
#result =
result = torch.zeros(5, 6)
{ "setup_code": "", "test_cases": [ "assert torch.all(result == 0).item()" ] }
import tensorflow as tf
import numpy as np
#result =
result = tf.zeros([5, 6])
{ "setup_code": "", "test_cases": [ "assert tf.reduce_all(result == 0).numpy()" ] }
9
import torch
import numpy as np
tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])
#result =
result = tensor.shape
{ "setup_code": "", "test_cases": [ "assert result == (2, 3)" ] }
import tensorflow as tf
import numpy as np
tensor = tf.constant([[1, 2, 3], [4, 5, 6]])
#result =
result = tensor.shape
{ "setup_code": "", "test_cases": [ "assert tuple(result.as_list()) == (2, 3)" ] }
10
import torch
import numpy as np
tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])
#result =
result = tensor.dim()
{ "setup_code": "", "test_cases": [ "assert result == 2" ] }
import tensorflow as tf
import numpy as np
tensor = tf.constant([[1, 2, 3], [4, 5, 6]])
#result =
result = tf.rank(tensor)
{ "setup_code": "", "test_cases": [ "assert result.numpy() == 2" ] }
11
import torch
import numpy as np
tensor = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
#result =
result = tensor[1:, 1:]
{ "setup_code": "", "test_cases": [ "assert np.allclose(result.numpy(), np.array([[5, 6], [8, 9]]))" ] }
import tensorflow as tf
import numpy as np
tensor = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
#result =
result = tensor[1:, 1:]
{ "setup_code": "", "test_cases": [ "assert np.allclose(result.numpy(), np.array([[5, 6], [8, 9]]))" ] }
12
import torch
import numpy as np
tensor = torch.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])
#result =
result = tensor.numpy()
{ "setup_code": "", "test_cases": [ "assert isinstance(result, np.ndarray) and result.shape == (2, 2, 3)" ] }
import tensorflow as tf
import numpy as np
tensor = tf.constant([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])
#result =
result = tensor.numpy()
{ "setup_code": "", "test_cases": [ "assert isinstance(result, np.ndarray) and result.shape == (2, 2, 3)" ] }
13
import torch
import numpy as np
tensor = torch.tensor([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])
#result =
result = tensor.view(2, 6)
{ "setup_code": "", "test_cases": [ "assert result.shape == (2, 6)" ] }
import tensorflow as tf
import numpy as np
tensor = tf.constant([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])
#result =
result = tf.reshape(tensor, (2, 6))
{ "setup_code": "", "test_cases": [ "result = tf.reshape(tensor, (2, 6))" ] }
14
import torch
from torch.autograd import Variable
import numpy as np
#result =
result = Variable(torch.randn(4, 6), requires_grad=True)
{ "setup_code": "", "test_cases": [ "assert result.shape == (4, 6) and isinstance(result, torch.Tensor) and result.requires_grad" ] }
import tensorflow as tf
import numpy as np
#result =
result = tf.Variable(tf.random.normal([4, 6]))
{ "setup_code": "", "test_cases": [ "assert result.shape == (4, 6) and isinstance(result, tf.Variable)" ] }
15
import torch
from torch.autograd import Variable
import numpy as np
x = Variable(torch.tensor(3.0), requires_grad=True)
#result =
y = x ** 2
y.backward()
result = x.grad
{ "setup_code": "", "test_cases": [ "assert np.isclose(result.item(), 6.0)" ] }
import tensorflow as tf
import numpy as np
x = tf.Variable(3.0)
#result =
with tf.GradientTape() as tape:
    y = x ** 2
result = tape.gradient(y, x)
{ "setup_code": "", "test_cases": [ "assert np.isclose(result.numpy(), 6.0)" ] }
16
import torch
from torch.autograd import Variable
import numpy as np
x = Variable(torch.tensor(10.0), requires_grad=True)
#result =
y = (x - 5) ** 2
y.backward()
with torch.no_grad():
    x -= 0.1 * x.grad
result = x
{ "setup_code": "", "test_cases": [ "assert np.isclose(result.item(), 9.0)" ] }
import tensorflow as tf
import numpy as np
x = tf.Variable(10.0)
#result =
with tf.GradientTape() as tape:
    tape.watch(x)
    y = (x - 5) ** 2
grad = tape.gradient(y, x)
x.assign(x - 0.1 * grad)
result = x
{ "setup_code": "", "test_cases": [ "assert np.isclose(result.numpy(), 9.0)" ] }
17
import torch
import numpy as np
a = torch.tensor([2., 3.], requires_grad=True)
b = torch.tensor([6., 4.], requires_grad=True)
#result =
Q = 3 * a ** 3 - b ** 2
Q.backward(torch.tensor([1., 1.]))
dQ_da = a.grad
dQ_db = b.grad
result = [dQ_da, dQ_db]
{ "setup_code": "", "test_cases": [ "assert np.allclose(result[0].numpy(), [36., 81.]) and np.allclose(result[1].numpy(), [-12., -8.])" ] }
import tensorflow as tf
import numpy as np
a = tf.Variable([2., 3.], dtype=tf.float32)
b = tf.Variable([6., 4.], dtype=tf.float32)
#result =
with tf.GradientTape() as tape:
    tape.watch([a, b])
    Q = 3 * a ** 3 - b ** 2
grads = tape.gradient(Q, [a, b])
result = grads
{ "setup_code": "", "test_cases": [ "assert np.allclose(result[0].numpy(), [36., 81.]) and np.allclose(result[1].numpy(), [-12., -8.])" ] }
18
import torch
from torchvision import datasets, transforms
import numpy as np
#train_dataset =
train_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transforms.ToTensor())
{ "setup_code": "", "test_cases": [ "assert isinstance(train_dataset, torch.utils.data.Dataset)\nassert len(train_dataset) == 60000" ] }
import tensorflow as tf
import numpy as np
import tensorflow_datasets as tfds
#train_dataset =
train_dataset = tfds.load('mnist', split='train', shuffle_files=True)
{ "setup_code": "", "test_cases": [ "assert isinstance(train_dataset, tf.data.Dataset)\nassert len(list(train_dataset)) == 60000" ] }
19
import torch
import numpy as np
tensor = torch.tensor([-1.0, 0, 1.0, 5.0], dtype=torch.float32)
#result =
result = torch.relu(tensor)
{ "setup_code": "", "test_cases": [ "assert np.allclose(result.numpy(), [0, 0, 1.0, 5.0], atol=1e-5)" ] }
import tensorflow as tf
import numpy as np
tensor = tf.constant([-1.0, 0, 1.0, 5.0], dtype=tf.float32)
#result =
result = tf.nn.relu(tensor)
{ "setup_code": "", "test_cases": [ "assert np.allclose(result.numpy(), [0, 0, 1.0, 5.0], atol=1e-5)" ] }
20
import torch
import numpy as np
tensor = torch.tensor([-1.0, 0, 1.0, 5.0, 6.5], dtype=torch.float32)
#result =
result = torch.nn.functional.relu6(tensor)
{ "setup_code": "", "test_cases": [ "assert np.allclose(result.numpy(), [0, 0, 1.0, 5.0, 6.0], atol=1e-5)" ] }
import tensorflow as tf
import numpy as np
tensor = tf.constant([-1.0, 0, 1.0, 5.0, 6.5], dtype=tf.float32)
#result =
result = tf.nn.relu6(tensor)
{ "setup_code": "", "test_cases": [ "assert np.allclose(result.numpy(), [0, 0, 1.0, 5.0, 6.0], atol=1e-5)" ] }
21
import torch
import numpy as np
tensor = torch.tensor([-1.0, 0, 1.0, 5.0], dtype=torch.float32)
#result =
result = torch.sigmoid(tensor)
{ "setup_code": "", "test_cases": [ "assert np.allclose(result.numpy(), [0.26894143, 0.5, 0.7310586, 0.9933072 ], atol=1e-5)" ] }
import tensorflow as tf
import numpy as np
tensor = tf.constant([-1.0, 0, 1.0, 5.0], dtype=tf.float32)
#result =
result = tf.nn.sigmoid(tensor)
{ "setup_code": "", "test_cases": [ "assert np.allclose(result.numpy(), [0.26894143, 0.5, 0.7310586, 0.9933072 ], atol=1e-5)" ] }
22
import torch
import torch.nn as nn
#class NeuralNetwork(nn.Module):
#model = NeuralNetwork()
class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28 * 28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork()
{ "setup_code": "", "test_cases": [ "assert isinstance(model, nn.Module) and len(list(model.children())) == 2\nassert isinstance(model.linear_relu_stack[0], nn.Linear)\nassert model.linear_relu_stack[0].out_features == 512 and model.linear_relu_stack[2].out_features == 512 and model.linear_relu_stack[4].out_features == 10\n" ] }
import tensorflow as tf
import numpy as np
#model =
model = tf.keras.Sequential(
    [
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(10),
    ]
)
{ "setup_code": "", "test_cases": [ "assert isinstance(model, tf.keras.Sequential) and len(model.layers) == 4\n", "assert isinstance(model.layers[1], tf.keras.layers.Dense)\n", "assert model.layers[1].units == 512 and model.layers[2].units == 512 and model.layers[3].units == 10\n" ] }
23
import torch
import torch.nn as nn
import torch.optim as optim
class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28 * 28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork()
# optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
{ "setup_code": "", "test_cases": [ "assert isinstance(optimizer, optim.SGD) and optimizer.param_groups[0]['lr'] == 0.01\n" ] }
import tensorflow as tf
import numpy as np
model = tf.keras.Sequential(
    [
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(10),
    ]
)
#optimizer
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
{ "setup_code": "", "test_cases": [ "assert isinstance(optimizer, tf.keras.optimizers.SGD) and np.isclose(optimizer.learning_rate.numpy(), 0.01)\n" ] }
24
import torch
import torch.nn as nn
# loss_fn =
loss_fn = nn.CrossEntropyLoss()
{ "setup_code": "", "test_cases": [ "assert isinstance(loss_fn, nn.CrossEntropyLoss) and loss_fn.reduction == 'mean'\n" ] }
import tensorflow as tf
import numpy as np
# loss_fn =
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
{ "setup_code": "", "test_cases": [ "assert isinstance(loss_fn, tf.keras.losses.SparseCategoricalCrossentropy) and loss_fn.from_logits == True\n" ] }
25
import torch
import torch.nn as nn
import torch.optim as optim
class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28 * 28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork()
# optimizer
optimizer = optim.Adam(model.parameters(), lr=0.001)
{ "setup_code": "", "test_cases": [ "assert isinstance(optimizer, optim.Adam) and optimizer.param_groups[0]['lr'] == 0.001\n" ] }
import tensorflow as tf
import numpy as np
model = tf.keras.Sequential(
    [
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(10),
    ]
)
#optimizer =
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
{ "setup_code": "", "test_cases": [ "assert isinstance(optimizer, tf.keras.optimizers.Adam) and np.isclose(optimizer.learning_rate.numpy(), 0.001)\n" ] }
26
import torch
tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])
#result =
result = torch.max(tensor).item()
{ "setup_code": "", "test_cases": [ "assert result == 6\n" ] }
import tensorflow as tf
tensor = tf.constant([[1, 2, 3], [4, 5, 6]])
#result =
result = tf.reduce_max(tensor).numpy()
{ "setup_code": "", "test_cases": [ "assert result == 6\n" ] }
27
import torch
tensor = torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
#result =
result = torch.mean(tensor)
{ "setup_code": "", "test_cases": [ "assert result.item() == 3.5\n" ] }
import tensorflow as tf
tensor = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
#result =
result = tf.reduce_mean(tensor).numpy()
{ "setup_code": "", "test_cases": [ "assert result == 3.5\n" ] }
28
import torch
tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])
#result =
result = torch.prod(tensor)
{ "setup_code": "", "test_cases": [ "assert result.item() == 720\n" ] }
import tensorflow as tf
tensor = tf.constant([[1, 2, 3], [4, 5, 6]])
#result =
result = tf.reduce_prod(tensor).numpy()
{ "setup_code": "", "test_cases": [ "assert result == 720\n" ] }
29
import torch
tensor = torch.tensor([1, 2, 1, 2, 1, 2])
#result =
result = torch.unique(tensor)
{ "setup_code": "", "test_cases": [ "assert torch.equal(result, torch.tensor([1, 2]))\n" ] }
import tensorflow as tf
import numpy as np
tensor = tf.constant([1, 2, 1, 2, 1, 2])
#result =
result = tf.unique(tensor).y.numpy()
{ "setup_code": "", "test_cases": [ "assert np.array_equal(result, [1, 2])\n" ] }
30
import torch
import torch.nn as nn
a = torch.tensor([1, 0, 1])
b = torch.tensor([1, 1, 0])
#result =
result = torch.bitwise_xor(a, b)
{ "setup_code": "", "test_cases": [ "assert torch.equal(result, torch.tensor([0, 1, 1]))\n" ] }
import tensorflow as tf
import numpy as np
a = tf.constant([1, 0, 1])
b = tf.constant([1, 1, 0])
#result =
result = tf.bitwise.bitwise_xor(a, b).numpy()
{ "setup_code": "", "test_cases": [ "assert np.array_equal(result, [0, 1, 1])\n" ] }
31
import torch
import torch.nn as nn
angles = torch.tensor([1, 3.2, 4.5])
#result =
sines = torch.sin(angles)
{ "setup_code": "", "test_cases": [ "assert torch.allclose(sines, torch.tensor([ 0.8415, -0.0584, -0.9775]), atol=1e-4)\n" ] }
import tensorflow as tf
import numpy as np
angles = tf.constant([1, 3.2, 4.5])
#result =
sines = tf.sin(angles)
{ "setup_code": "", "test_cases": [ "assert np.allclose(sines.numpy(), [ 0.8415, -0.0584, -0.9775], atol=1e-4)\n" ] }
32
import torch
import torch.nn as nn
import torch
# Define the model and save it to the variable 'model'.
#model = TinyModel()
class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = torch.nn.Linear(100, 200)
        self.activation = torch.nn.ReLU()
        self.linear2 = torch.nn.Linear(200, 10)
        self.softmax = torch.nn.Softmax()

    def forward(self, x):
        x = self.linear1(x)
        x = self.activation(x)
        x = self.linear2(x)
        x = self.softmax(x)
        return x

model = TinyModel()
{ "setup_code": "", "test_cases": [ "assert isinstance(model, nn.Module) and model.linear1.out_features == 200\n", "assert model.linear2.out_features == 10 and isinstance(model.activation, nn.ReLU) and isinstance(model.softmax, nn.Softmax)" ] }
import tensorflow as tf
# Define the model and save it to the variable 'model'.
#model =
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(200, input_shape=(100,), activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
{ "setup_code": "", "test_cases": [ "assert model.layers[0].output.shape == (None, 200)\n", "assert model.layers[0].activation.__name__ == 'relu' and model.layers[1].output.shape == (None, 10) and model.layers[1].activation.__name__ == 'softmax'" ] }
33
import torch
class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = torch.nn.Linear(100, 200)
        self.activation = torch.nn.ReLU()
        self.linear2 = torch.nn.Linear(200, 10)
        self.softmax = torch.nn.Softmax()

    def forward(self, x):
        x = self.linear1(x)
        x = self.activation(x)
        x = self.linear2(x)
        x = self.softmax(x)
        return x

model = TinyModel()
#total_params =
total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
{ "setup_code": "", "test_cases": [ "assert total_params == 22210, f'Expected 22210 parameters, got {total_params}'\n" ] }
import tensorflow as tf
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(200, input_shape=(100,), activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
#total_params =
total_params = model.count_params()
{ "setup_code": "", "test_cases": [ "assert total_params == 22210, f'Expected 22210 parameters, got {total_params}'\n" ] }
34
import torch
import torch.nn as nn
class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = torch.nn.Linear(100, 200)
        self.activation = torch.nn.ReLU()
        self.linear2 = torch.nn.Linear(200, 10)
        self.softmax = torch.nn.Softmax()

    def forward(self, x):
        x = self.linear1(x)
        x = self.activation(x)
        x = self.linear2(x)
        x = self.softmax(x)
        return x

model = TinyModel()
#first_layer_params =
first_layer_params = model.linear1.weight.numel() + model.linear1.bias.numel()
{ "setup_code": "", "test_cases": [ "assert first_layer_params == 20200, f'Expected 20200 parameters in the first layer, got {first_layer_params}'\n" ] }
import tensorflow as tf
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(200, input_shape=(100,), activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
#first_layer_params =
first_layer_params = model.layers[0].count_params()
{ "setup_code": "", "test_cases": [ "assert first_layer_params == 20200, f'Expected 20200 parameters in the first layer, got {first_layer_params}'\n" ] }
35
import torch
import torch.nn as nn
tensor = torch.tensor([[[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]]], dtype=torch.float32)
#output =
pool = nn.MaxPool2d(2, stride=2)
output = pool(tensor)
{ "setup_code": "", "test_cases": [ "assert torch.equal(output, torch.tensor([[[[6, 8], [14, 16]]]]))\n" ] }
import tensorflow as tf
tensor = tf.constant([[[[1], [2], [3], [4]], [[5], [6], [7], [8]], [[9], [10], [11], [12]], [[13], [14], [15], [16]]]], dtype=tf.float32)
#output =
output = tf.nn.max_pool2d(tensor, ksize=2, strides=2, padding='VALID')
{ "setup_code": "", "test_cases": [ "assert tf.reduce_all(tf.equal(output, tf.constant([[[[6], [8]], [[14], [16]]]], dtype=tf.float32))).numpy(), 'Output did not match expected'\n" ] }
36
import torch
import torch.nn as nn
tensor = torch.tensor([[1.0, 2.0, 3.0, 4.0, 5.0], [2.0, 3.0, 4.0, 5.0, 6.0]], dtype=torch.float32)  # Increased batch size
#output =
bn_layer = nn.BatchNorm1d(num_features=5)
output = bn_layer(tensor)
{ "setup_code": "", "test_cases": [ "assert torch.allclose(output, torch.tensor([[-1.0000, -1.0000, -1.0000, -1.0000, -1.0000], [ 1.0000, 1.0000, 1.0000, 1.0000, 1.0000]]), atol=0.0001), 'Output did not match expected values.'\n" ] }
import tensorflow as tf
tensor = tf.constant([[1.0, 2.0, 3.0, 4.0, 5.0], [2.0, 3.0, 4.0, 5.0, 6.0]], dtype=tf.float32)  # Already has batch dimension
#output =
bn_layer = tf.keras.layers.BatchNormalization(axis=1)
output = bn_layer(tensor, training=True)
{ "setup_code": "", "test_cases": [ "assert tf.experimental.numpy.allclose(output, tf.constant([[-0.9980061, -0.99800587, -0.99800587, -0.99800587, -0.99800587], [0.99800587, 0.99800634, 0.99800587, 0.99800587, 0.99800587]], dtype=tf.float32), atol=1e-5), 'Output did not match expected values.'\n" ] }
37
import torch
import torch.nn as nn
torch.manual_seed(0)
tensor = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0], requires_grad=True)
#output =
dropout = nn.Dropout(p=0.5)
output = dropout(tensor)
{ "setup_code": "", "test_cases": [ "expected_output = torch.tensor([0., 0., 6., 0., 0.])\nassert torch.equal(output, expected_output), 'Output does not match expected tensor'\n" ] }
import tensorflow as tf
tensor = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0])
#You have to set the seed to 0 in dropout
#output =
dropout = tf.keras.layers.Dropout(rate=0.5, seed=0)
output = dropout(tensor, training=True)
{ "setup_code": "", "test_cases": [ "expected_output = tf.constant([0. ,4., 6. ,8., 0.], dtype=tf.float32)\nassert tf.reduce_all(tf.equal(output, expected_output)).numpy(), 'Output does not match expected tensor'\n" ] }
38
import torch
import torch.nn as nn
import torch.optim as optim
class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = torch.nn.Linear(100, 200)
        self.activation = torch.nn.ReLU()
        self.linear2 = torch.nn.Linear(200, 10)
        self.softmax = torch.nn.Softmax()

    def forward(self, x):
        x = self.linear1(x)
        x = self.activation(x)
        x = self.linear2(x)
        x = self.softmax(x)
        return x

model = TinyModel()
#optimizer =
optimizer = optim.SGD(model.parameters(), lr=0.0001)
{ "setup_code": "", "test_cases": [ "assert isinstance(optimizer, optim.SGD) and optimizer.param_groups[0]['lr'] == 0.0001, f'Incorrect optimizer configuration'\n" ] }
import tensorflow as tf
import numpy as np
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(200, input_shape=(100,), activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
#optimizer =
optimizer = tf.keras.optimizers.SGD(learning_rate=0.0001)
{ "setup_code": "", "test_cases": [ "assert isinstance(optimizer, tf.keras.optimizers.SGD) and np.isclose(optimizer.learning_rate.numpy(), 0.0001), f'Incorrect optimizer configuration; expected learning rate 0.0001, got {optimizer.learning_rate.numpy()}'\n" ] }
39
import torch
import torch.nn as nn
input_tensor = torch.tensor([2.7, 4.2, 3.6, 9.8], requires_grad=True)
target_tensor = torch.tensor([1., 3., 5., 7.])
#loss =
mse_loss = nn.MSELoss()
loss = mse_loss(input_tensor, target_tensor)
{ "setup_code": "expected_loss = 3.5325\n", "test_cases": [ "assert torch.isclose(loss, torch.tensor(expected_loss)), f'Calculated loss {loss.item()} does not match expected loss {expected_loss}'\n" ] }
import tensorflow as tf
input_tensor = tf.constant([2.7, 4.2, 3.6, 9.8])
target_tensor = tf.constant([1., 3., 5., 7.])
#loss =
mse_loss = tf.keras.losses.MeanSquaredError()
loss = mse_loss(target_tensor, input_tensor)
{ "setup_code": "expected_loss = 3.5325\n", "test_cases": [ "assert tf.experimental.numpy.isclose(loss, expected_loss, atol=1e-6), f'Calculated loss {loss.numpy()} does not match expected loss {expected_loss}'\n" ] }
40
import torch
import torch.nn as nn
import torch.optim as optim
torch.manual_seed(0)

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(100, 200)
        self.activation = nn.ReLU()
        self.linear2 = nn.Linear(200, 10)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        x = self.linear1(x)
        x = self.activation(x)
        x = self.linear2(x)
        x = self.softmax(x)
        return x

model = TinyModel()
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)
input_tensor = torch.randn(10, 100)  # Batch size of 10
target = torch.randn(10, 10)  # Random target values for MSE calculation
# loss =
optimizer.zero_grad()
output = model(input_tensor)
loss = criterion(output, target)
loss.backward()
optimizer.step()
{ "setup_code": "expected_loss = 1.2815496921539307\n", "test_cases": [ "assert torch.isclose(loss,torch.tensor(expected_loss))\n" ] }
import tensorflow as tf
import numpy as np
tf.random.set_seed(0)
np.random.seed(0)
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(200, input_shape=(100,), activation='relu', kernel_initializer=tf.keras.initializers.GlorotUniform(seed=0)),
    tf.keras.layers.Dense(10, activation='softmax', kernel_initializer=tf.keras.initializers.GlorotUniform(seed=0))
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss='mean_squared_error')
input_tensor = tf.random.normal((10, 100), seed=0)  # Batch size of 10
target = tf.random.normal((10, 10), seed=0)  # Random target values for MSE calculation
# loss =
loss = model.train_on_batch(input_tensor, target)
{ "setup_code": "expected_loss = 0.9032668\n", "test_cases": [ "assert tf.experimental.numpy.isclose(loss, expected_loss)\n" ] }
41
import torch
import torch.nn as nn
torch.manual_seed(0)
input_tensor = torch.randn(4, requires_grad=True)
target_tensor = torch.randn(4)
#loss =
mae_loss = nn.L1Loss()
loss = mae_loss(input_tensor, target_tensor)
{ "setup_code": "expected_loss = 1.6456040143966675\n", "test_cases": [ "assert torch.isclose(loss, torch.tensor(expected_loss), atol=1e-4), f'Calculated loss {loss.item()} does not match expected loss {expected_loss}'\n" ] }
import tensorflow as tf
tf.random.set_seed(0)
input_tensor = tf.random.normal([4], seed=0)
target_tensor = tf.random.normal([4], seed=0)
#loss =
mae_loss = tf.keras.losses.MeanAbsoluteError()
loss = mae_loss(target_tensor, input_tensor)
{ "setup_code": "expected_loss = 1.902283787727356\n", "test_cases": [ "assert tf.experimental.numpy.isclose(loss, expected_loss, atol=1e-4), f'Calculated loss {loss.numpy()} does not match expected loss {expected_loss}'\n" ] }
42
import torch
import torch.nn as nn
torch.manual_seed(0)
input_tensor = torch.randn(7, requires_grad=True)
target_tensor = torch.randn(7, requires_grad=True)
#loss =
hinge_loss = nn.HingeEmbeddingLoss()
loss = hinge_loss(input_tensor.float(), target_tensor.float())
{ "setup_code": "expected_loss = 1.0772851705551147\n", "test_cases": [ "assert torch.isclose(loss, torch.tensor(expected_loss), atol=1e-4), f'Calculated loss {loss.item()} does not match expected loss {expected_loss}'\n" ] }
import tensorflow as tf
tf.random.set_seed(0)
input_tensor = tf.random.normal([7], seed=0)
target_tensor = tf.random.normal([7], seed=0)
#loss =
hinge_loss = tf.keras.losses.Hinge()
loss = hinge_loss(target_tensor, input_tensor)
{ "setup_code": "expected_loss = 1.2223261594772339\n", "test_cases": [ "assert tf.experimental.numpy.isclose(loss, expected_loss, atol=1e-4), f'Calculated loss {loss.numpy()} does not match expected loss {expected_loss}'\n" ] }
43
import torch
import torch.nn as nn
torch.manual_seed(0)
input_tensor = torch.randn(5, requires_grad=True)
target_tensor = torch.randn(5)
#loss =
huber_loss = nn.HuberLoss()
loss = huber_loss(input_tensor, target_tensor)
{ "setup_code": "expected_loss = 1.2437692880630493\n", "test_cases": [ "assert torch.isclose(loss, torch.tensor(expected_loss), atol=1e-4), f'Calculated loss {loss.item()} does not match expected loss {expected_loss}'\n" ] }
import tensorflow as tf
tf.random.set_seed(0)
input_tensor = tf.random.normal([5], seed=0)
target_tensor = tf.random.normal([5], seed=0)
#loss =
huber_loss = tf.keras.losses.Huber()
loss = huber_loss(target_tensor, input_tensor)
{ "setup_code": "expected_loss = 0.7624791860580444\n", "test_cases": [ "assert tf.experimental.numpy.isclose(loss, expected_loss, atol=1e-4), f'Calculated loss {loss.numpy()} does not match expected loss {expected_loss}'\n" ] }
44
import torch
import torch.nn as nn
# model =
model = nn.Sequential(
    nn.Conv2d(1, 20, 5),
    nn.ReLU(),
    nn.Conv2d(20, 64, 5),
    nn.ReLU()
)
{ "setup_code": "\n", "test_cases": [ "assert isinstance(model, nn.Sequential), 'Model is not an instance of nn.Sequential'\n", "assert len(model) == 4, 'Model does not contain the correct number of layers'\n", "assert isinstance(model[0], nn.Conv2d) and model[0].in_channels == 1 and model[0].out_channels == 20, 'First layer specifications are incorrect'\n", "assert isinstance(model[1], nn.ReLU), 'Second layer should be ReLU activation'\nassert isinstance(model[2], nn.Conv2d) and model[2].in_channels == 20 and model[2].out_channels == 64, 'Third layer specifications are incorrect'\n", "assert isinstance(model[3], nn.ReLU), 'Fourth layer should be ReLU activation'\n" ] }
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(20, (5, 5), input_shape=(None, None, 1), padding='valid'),
    tf.keras.layers.ReLU(),
    tf.keras.layers.Conv2D(64, (5, 5), padding='valid'),
    tf.keras.layers.ReLU()
])
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(20, (5, 5), input_shape=(None, None, 1), padding='valid'),
    tf.keras.layers.ReLU(),
    tf.keras.layers.Conv2D(64, (5, 5), padding='valid'),
    tf.keras.layers.ReLU()
])
{ "setup_code": "", "test_cases": [ "assert isinstance(model, tf.keras.Sequential), 'Model is not an instance of tf.keras.Sequential'\n", "assert len(model.layers) == 4, 'Model does not contain the correct number of layers'\n", "assert isinstance(model.layers[0], tf.keras.layers.Conv2D) and model.layers[0].filters == 20 and model.layers[0].kernel_size == (5, 5), 'First layer specifications are incorrect'\n", "assert isinstance(model.layers[1], tf.keras.layers.ReLU), 'Second layer should be ReLU activation'\n", "assert isinstance(model.layers[2], tf.keras.layers.Conv2D) and model.layers[2].filters == 64 and model.layers[2].kernel_size == (5, 5), 'Third layer specifications are incorrect'\nassert isinstance(model.layers[3], tf.keras.layers.ReLU), 'Fourth layer should be ReLU activation'\n" ] }
45
import torch
import torch.nn as nn
torch.manual_seed(0)

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(100, 200)
        self.activation = nn.ReLU()
        self.linear2 = nn.Linear(200, 10)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        x = self.linear1(x)
        x = self.activation(x)
        x = self.linear2(x)
        x = self.softmax(x)
        return x

model = TinyModel()
# set device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
{ "setup_code": "\n", "test_cases": [ "assert next(model.parameters()).is_cuda == True, 'Model is not on CUDA device; it is on {}'.format(next(model.parameters()).device)\n" ] }
import tensorflow as tf
tf.config.set_soft_device_placement(True)  # Enable automatic device placement
tf.debugging.set_log_device_placement(True)  # Log device placement for debugging
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(200, input_shape=(100,), activation='relu', kernel_initializer=tf.keras.initializers.GlorotUniform(seed=0)),
    tf.keras.layers.Dense(10, activation='softmax', kernel_initializer=tf.keras.initializers.GlorotUniform(seed=0))
])
# set device
device = '/gpu:0' if tf.config.list_physical_devices('GPU') else '/cpu:0'
with tf.device(device):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(200, input_shape=(100,), activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
{ "setup_code": "\ndummy_input = tf.random.normal([1, 100])\noutput = model(dummy_input)\n", "test_cases": [ "gpu_available = tf.config.list_physical_devices('GPU')\nop_device = output.device\nassert ('gpu' in op_device.lower() and gpu_available) or ('cpu' in op_device.lower() and not gpu_available), 'Weight {} not on device {}'.format(weight.name, device)\n" ] }
46
import torch
import torch.nn as nn
model = nn.Sequential(
    nn.Conv2d(1, 20, 5),
    nn.ReLU(),
    nn.Conv2d(20, 64, 5),
    nn.ReLU()
)
# Save the model with the name 'seq_model.pth'
torch.save(model, 'seq_model.pth')
{ "setup_code": "\n", "test_cases": [ "import os\nassert os.path.exists('seq_model.pth'), 'Model file not found after save operation'" ] }
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(20, (5, 5), input_shape=(None, None, 1), padding='valid'),
    tf.keras.layers.ReLU(),
    tf.keras.layers.Conv2D(64, (5, 5), padding='valid'),
    tf.keras.layers.ReLU()
])
# Save the model with the name 'seq_model.keras'
model.save('seq_model.keras')
{ "setup_code": "", "test_cases": [ "import os\nassert os.path.exists('seq_model.keras'), 'Model file not found after save operation'" ] }
47
import torch
import torch.nn as nn
model = nn.Sequential(
    nn.Conv2d(1, 20, 5),
    nn.ReLU(),
    nn.Conv2d(20, 64, 5),
    nn.ReLU()
)
torch.save(model, 'seq_model.pth')
# Load the model
loaded_model = torch.load('seq_model.pth')
{ "setup_code": "\n", "test_cases": [ "\nassert isinstance(loaded_model, nn.Sequential), 'Loaded model is not an instance of nn.Sequential'\n", "assert len(loaded_model) == 4, 'Model does not contain the correct number of layers'\n", "assert isinstance(loaded_model[0], nn.Conv2d) and loaded_model[0].in_channels == 1 and loaded_model[0].out_channels == 20, 'First Conv2d layer parameters are incorrect'\n", "assert isinstance(loaded_model[1], nn.ReLU), 'Second layer should be ReLU activation'\nassert isinstance(loaded_model[2], nn.Conv2d) and loaded_model[2].in_channels == 20 and loaded_model[2].out_channels == 64, 'Third Conv2d layer parameters are incorrect'\n", "assert isinstance(loaded_model[3], nn.ReLU), 'Fourth layer should be ReLU activation'" ] }
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(20, (5, 5), input_shape=(None, None, 1), padding='valid'),
    tf.keras.layers.ReLU(),
    tf.keras.layers.Conv2D(64, (5, 5), padding='valid'),
    tf.keras.layers.ReLU()
])
model.save('seq_model.keras')
# Load the model
loaded_model = tf.keras.models.load_model('seq_model.keras')
{ "setup_code": "", "test_cases": [ "\nassert isinstance(loaded_model, tf.keras.Sequential), 'Loaded model is not an instance of tf.keras.Sequential'\n", "assert len(loaded_model.layers) == 4, 'Model does not contain the correct number of layers'\n", "assert isinstance(loaded_model.layers[0], tf.keras.layers.Conv2D) and loaded_model.layers[0].filters == 20 and loaded_model.layers[0].kernel_size == (5, 5), 'First Conv2D layer parameters are incorrect'\n", "assert isinstance(loaded_model.layers[1], tf.keras.layers.ReLU), 'Second layer should be ReLU activation'\nassert isinstance(loaded_model.layers[2], tf.keras.layers.Conv2D) and loaded_model.layers[2].filters == 64 and loaded_model.layers[2].kernel_size == (5, 5), 'Third Conv2D layer parameters are incorrect'\n", "assert isinstance(loaded_model.layers[3], tf.keras.layers.ReLU), 'Fourth layer should be ReLU activation'\n " ] }
48
import torch
import torch.nn as nn
# Define the model using Sequential
# model =
model = nn.Sequential(
    nn.Embedding(num_embeddings=1000, embedding_dim=64),
    nn.LSTM(64, 128, batch_first=True),
    nn.Linear(128, 10)
)
{ "setup_code": "\n", "test_cases": [ "\nassert isinstance(model[0], nn.Embedding) and model[0].num_embeddings == 1000 and model[0].embedding_dim == 64, 'Embedding layer configuration error'\n", "assert isinstance(model[1], nn.LSTM) and model[1].hidden_size == 128, 'LSTM layer configuration error'\n", "assert isinstance(model[2], nn.Linear) and model[2].out_features == 10, 'Dense layer configuration error'\n" ] }
import tensorflow as tf
from tensorflow.keras import layers
# Define the model using Sequential
# model =
model = tf.keras.Sequential([
    layers.Embedding(input_dim=1000, output_dim=64),
    layers.LSTM(128),
    layers.Dense(10)
])
{ "setup_code": "", "test_cases": [ "\nassert isinstance(model.layers[0], layers.Embedding) and model.layers[0].input_dim == 1000 and model.layers[0].output_dim == 64, 'Embedding layer configuration error'\n ", "assert isinstance(model.layers[1], layers.LSTM) and model.layers[1].units == 128, 'LSTM layer configuration error'\n", "assert isinstance(model.layers[2], layers.Dense) and model.layers[2].units == 10, 'Dense layer configuration error'\n" ] }
49
import torch
import torch.nn as nn
# Define the model using Sequential
# model =
model = nn.Sequential(
    nn.LSTM(input_size=10, hidden_size=64, batch_first=True, bidirectional=True),
    nn.LSTM(input_size=128, hidden_size=32, batch_first=True, bidirectional=True),  # Input size doubles due to bidirectionality
    nn.Linear(64, 10)  # Output from the second LSTM is doubled due to bidirectionality
)
{ "setup_code": "\n", "test_cases": [ "\nassert isinstance(model[0], nn.LSTM) and model[0].hidden_size == 64 and model[0].bidirectional, 'First LSTM layer configuration error'\n", "assert isinstance(model[1], nn.LSTM) and model[1].hidden_size == 32 and model[1].bidirectional, 'Second LSTM layer configuration error'\nassert isinstance(model[2], nn.Linear) and model[2].out_features == 10, 'Dense layer configuration error'\n " ] }
import tensorflow as tf
from tensorflow.keras import layers
# Define the model using Sequential
# model =
model = tf.keras.Sequential([
    layers.Bidirectional(layers.LSTM(64, return_sequences=True), input_shape=(5, 10)),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(10)
])
{ "setup_code": "\ndummy_input = tf.random.normal([32, 5, 10])\nfirst_output = model.layers[0](dummy_input)\nsecond_output = model.layers[1](first_output)\n", "test_cases": [ "assert isinstance(model.layers[0], layers.Bidirectional) and first_output.shape == (32, 5, 128), 'First Bidirectional LSTM layer configuration error'\nassert isinstance(model.layers[1], layers.Bidirectional) and second_output.shape == (32,64), 'Second Bidirectional LSTM layer configuration error'\n", "assert isinstance(model.layers[2], layers.Dense) and model.layers[2].units == 10, 'Dense layer configuration error'\n" ] }
50
import torch
import torch.nn.functional as F
import numpy as np
torch.manual_seed(0)
tensor1 = torch.randn(10, requires_grad=True)
tensor2 = torch.randn(10, requires_grad=True)
# Calculate cosine similarity
# cosine_similarity =
cosine_similarity = F.cosine_similarity(tensor1, tensor2, dim=0)
{ "setup_code": "\nexpected_value = 0.41493287682533264\n", "test_cases": [ "assert np.isclose(cosine_similarity.item(), expected_value, atol=1e-5), 'Cosine similarity calculation does not match expected value'\n" ] }
import tensorflow as tf
tf.random.set_seed(0)
tensor1 = tf.random.normal([10], seed=0)
tensor2 = tf.random.normal([10], seed=0)
# Calculate cosine similarity
# cosine_similarity =
cosine_similarity = tf.keras.losses.cosine_similarity(tensor1, tensor2)
{ "setup_code": "\nexpected_value = -0.25341374\n", "test_cases": [ "assert tf.experimental.numpy.isclose(cosine_similarity.numpy(), expected_value, atol=1e-5), 'Cosine similarity calculation does not match expected value'\n" ] }
51
import torch
import torch.nn.functional as F
import numpy as np
torch.manual_seed(0)
tensor1 = torch.randn(10, requires_grad=True)
tensor2 = torch.randn(10, requires_grad=True)
# Calculate Euclidean distance
# euclidean_distance =
euclidean_distance = torch.dist(tensor1, tensor2)
{ "setup_code": "\nexpected_value = 3.3985581398010254\n", "test_cases": [ "assert np.isclose(euclidean_distance.item(), expected_value, atol=1e-5), 'Euclidean distance calculation does not match expected value'\n" ] }
import tensorflow as tf
tf.random.set_seed(0)
tensor1 = tf.random.normal([10], seed=0)
tensor2 = tf.random.normal([10], seed=0)
# Calculate Euclidean distance
# euclidean_distance =
euclidean_distance = tf.norm(tensor1 - tensor2)
{ "setup_code": "\nexpected_value = 4.275403\n", "test_cases": [ "assert tf.experimental.numpy.isclose(euclidean_distance.numpy(), expected_value, atol=1e-5), 'Euclidean distance calculation does not match expected value'" ] }
52
import torch
import torch.nn as nn
torch.manual_seed(1)
word_to_ix = {"hello": 0, "world": 1}
embeds = nn.Embedding(2, 5)  # 2 words in vocab, 5 dimensional embeddings
lookup_tensor = torch.tensor([word_to_ix["hello"]], dtype=torch.long)
# hello_embed =
hello_embed = embeds(lookup_tensor)
{ "setup_code": "", "test_cases": [ "assert hello_embed.shape == (1, 5), 'Shape of hello_embed tensor is incorrect'", "expected_values = torch.tensor([[ 0.6614, 0.2669, 0.0617, 0.6213, -0.4519]])\nassert torch.allclose(hello_embed, expected_values, atol=1e-4), 'Values of hello_embed tensor are incorrect'" ] }
import tensorflow as tf
import tensorflow as tf
import tensorflow.keras as keras
tf.random.set_seed(1)
word_to_ix = {"hello": 0, "world": 1}
embeds = tf.keras.layers.Embedding(input_dim=2, output_dim=5, embeddings_initializer=keras.initializers.RandomNormal(seed=1))
lookup_tensor = tf.constant([word_to_ix["hello"]])
# hello_embed =
hello_embed = embeds(lookup_tensor)
{ "setup_code": "tf.random.set_seed(1)", "test_cases": [ "assert hello_embed.shape == (1, 5), 'Shape of hello_embed tensor is incorrect'", "expected_values = tf.constant([[0.00633252, -0.02465083, 0.03155954, -0.03944233, 0.02841545]], dtype=tf.float32)\nassert tf.reduce_all(tf.abs(hello_embed - expected_values) < 1e-4), 'Values of hello_embed tensor are incorrect'" ] }
53
import torch
import math
# def scaled_dot_product_attention(Q, K, V, mask=None):
# result =
def scaled_dot_product_attention(Q, K, V, mask=None):
    attn_scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(Q.size(-1))
    if mask is not None:
        attn_scores = attn_scores.masked_fill(mask == 0, -1e9)
    attn_probs = torch.softmax(attn_scores, dim=-1)
    output = torch.matmul(attn_probs, V)
    return output

# result = scaled_dot_product_attention(Q, K, V, mask)
{ "setup_code": "import math\nQ = torch.rand(5, 10, 20)\nK = torch.rand(5, 10, 20)\nV = torch.rand(5, 10, 20)\nmask = torch.randint(0, 2, (5, 10, 10))\n", "test_cases": [ "assert scaled_dot_product_attention(Q, K, V, mask).shape == torch.Size([5, 10, 20])", "assert scaled_dot_product_attention(Q, K, V).shape == torch.Size([5, 10, 20])" ] }
import tensorflow as tf
import math
# def scaled_dot_product_attention(Q, K, V, mask=None):
# result =
def scaled_dot_product_attention(Q, K, V, mask=None):
    matmul_qk = tf.matmul(Q, K, transpose_b=True)
    depth = tf.cast(tf.shape(K)[-1], tf.float32)
    logits = matmul_qk / tf.math.sqrt(depth)
    if mask is not None:
        mask = tf.cast(mask, tf.float32)
        logits += (mask * -1e9)
    attention_weights = tf.nn.softmax(logits, axis=-1)
    output = tf.matmul(attention_weights, V)
    return output

# result = scaled_dot_product_attention(Q, K, V, mask)
{ "setup_code": "import tensorflow as tf\nimport math\nQ = tf.random.uniform((5, 10, 20))\nK = tf.random.uniform((5, 10, 20))\nV = tf.random.uniform((5, 10, 20))\nmask = tf.random.uniform((5, 10, 10), maxval=2, dtype=tf.int32)", "test_cases": [ "assert scaled_dot_product_attention(Q, K, V, mask).shape == (5, 10, 20)", "assert scaled_dot_product_attention(Q, K, V).shape == (5, 10, 20)" ] }
54
import torch
import torch.nn as nn
# def split_heads(num_heads, d_k, x):
def split_heads(num_heads, d_k, x):
    batch_size, seq_length, d_model = x.size()
    return x.view(batch_size, seq_length, num_heads, d_k).transpose(1, 2)

# result = split_heads(num_heads, d_k, x)
{ "setup_code": "\nimport torch\nnum_heads = 2\nd_k = 64\nx = torch.rand(32, 10, 128)\n", "test_cases": [ "assert split_heads(num_heads, d_k, x).shape == (32, 2, 10, 64)" ] }
import tensorflow as tf
# def split_heads(num_heads, d_k, x):
def split_heads(num_heads, d_k, x):
    batch_size, seq_length, d_model = x.shape
    x = tf.reshape(x, (batch_size, seq_length, num_heads, d_k))
    return tf.transpose(x, perm=[0, 2, 1, 3])

# result = split_heads(num_heads, d_k, x)
{ "setup_code": "import tensorflow as tf\nnum_heads = 2\nd_k = 64\nx = tf.random.uniform((32, 10, 128))", "test_cases": [ "assert split_heads(num_heads, d_k, x).shape == (32, 2, 10, 64)" ] }
55
import torch
import torch.nn as nn
# Define the combine_heads function
# def combine_heads(d_model, x):
#     return
def combine_heads(d_model, x):
    batch_size, _, seq_length, d_k = x.size()
    return x.transpose(1, 2).contiguous().view(batch_size, seq_length, d_model)
{ "setup_code": "", "test_cases": [ "x = torch.randn(2, 8, 10, 64)\nd_model = 8 * 64\nresult = combine_heads(d_model, x)\nassert result.shape == (2, 10, 512)", "x = torch.randn(3, 4, 5, 32)\nd_model = 4 * 32\nresult = combine_heads(d_model, x)\nassert result.shape == (3, 5, 128)", "x = torch.randn(1, 2, 3, 16)\nd_model = 2 * 16\nresult = combine_heads(d_model, x)\nassert result.shape == (1, 3, 32)" ] }
import tensorflow as tf
# Define the combine_heads function
# def combine_heads(d_model, x):
#     return
def combine_heads(d_model, x):
    batch_size, num_heads, seq_length, depth = tf.shape(x)
    x = tf.transpose(x, perm=[0, 2, 1, 3])
    return tf.reshape(x, (batch_size, seq_length, d_model))
{ "setup_code": "", "test_cases": [ "x = tf.random.normal((2, 8, 10, 64))\nd_model = 8 * 64\nresult = combine_heads(d_model, x)\nassert result.shape == (2, 10, 512)", "x = tf.random.normal((3, 4, 5, 32))\nd_model = 4 * 32\nresult = combine_heads(d_model, x)\nassert result.shape == (3, 5, 128)", "x = tf.random.normal((1, 2, 3, 16))\nd_model = 2 * 16\nresult = combine_heads(d_model, x)\nassert result.shape == (1, 3, 32)" ] }
56
import torch
import torch.nn as nn
import math
class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, num_heads):
        super().__init__()
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
        self.d_model = d_model
        self.num_heads = num_heads
        self.d_k = d_model // num_heads
        self.W_q = nn.Linear(d_model, d_model)
        self.W_k = nn.Linear(d_model, d_model)
        self.W_v = nn.Linear(d_model, d_model)
        self.W_o = nn.Linear(d_model, d_model)

    def scaled_dot_product_attention(self, Q, K, V, mask=None):
        attn_scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(self.d_k)
        if mask is not None:
            attn_scores = attn_scores.masked_fill(mask == 0, -1e9)
        attn_probs = torch.softmax(attn_scores, dim=-1)
        output = torch.matmul(attn_probs, V)
        return output

    def split_heads(self, x):
        batch_size, seq_length, d_model = x.size()
        return x.view(batch_size, seq_length, self.num_heads, self.d_k).transpose(1, 2)

    def combine_heads(self, x):
        batch_size, _, seq_length, d_k = x.size()
        return x.transpose(1, 2).contiguous().view(batch_size, seq_length, self.d_model)

    # def forward(self, Q, K, V, mask=None):

# model = MultiHeadAttention(d_model, num_heads)

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, num_heads):
        super().__init__()
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
        self.d_model = d_model
        self.num_heads = num_heads
        self.d_k = d_model // num_heads
        self.W_q = nn.Linear(d_model, d_model)
        self.W_k = nn.Linear(d_model, d_model)
        self.W_v = nn.Linear(d_model, d_model)
        self.W_o = nn.Linear(d_model, d_model)

    def scaled_dot_product_attention(self, Q, K, V, mask=None):
        attn_scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(self.d_k)
        if mask is not None:
            attn_scores = attn_scores.masked_fill(mask == 0, -1e9)
        attn_probs = torch.softmax(attn_scores, dim=-1)
        output = torch.matmul(attn_probs, V)
        return output

    def split_heads(self, x):
        batch_size, seq_length, d_model = x.size()
        return x.view(batch_size, seq_length, self.num_heads, self.d_k).transpose(1, 2)

    def combine_heads(self, x):
        batch_size, _, seq_length, d_k = x.size()
        return x.transpose(1, 2).contiguous().view(batch_size, seq_length, self.d_model)

    def forward(self, Q, K, V, mask=None):
        Q = self.split_heads(self.W_q(Q))
        K = self.split_heads(self.W_k(K))
        V = self.split_heads(self.W_v(V))
        attn_output = self.scaled_dot_product_attention(Q, K, V, mask)
        output = self.W_o(self.combine_heads(attn_output))
        return output

# model = MultiHeadAttention(d_model, num_heads)
{ "setup_code": "\nd_model = 512\nnum_heads = 8\nmodel = MultiHeadAttention(d_model, num_heads)\n", "test_cases": [ "assert isinstance(model, nn.Module)", "Q, K, V = torch.rand(5, 10, 512), torch.rand(5, 10, 512), torch.rand(5, 10, 512)\noutput = model(Q, K, V)\nassert output.shape == (5, 10, 512)", "mask = torch.zeros(5, 10, 10).unsqueeze(1).repeat(1, 8, 1, 1)\noutput = model(Q, K, V, mask)\nassert output.shape == (5, 10, 512)" ] }
import tensorflow as tf
import math
class MultiHeadAttention(tf.keras.layers.Layer):
    def __init__(self, d_model, num_heads):
        super(MultiHeadAttention, self).__init__()
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
        self.num_heads = num_heads
        self.d_model = d_model
        self.d_k = d_model // num_heads
        self.W_q = tf.keras.layers.Dense(d_model)
        self.W_k = tf.keras.layers.Dense(d_model)
        self.W_v = tf.keras.layers.Dense(d_model)
        self.W_o = tf.keras.layers.Dense(d_model)

    def scaled_dot_product_attention(self, Q, K, V, mask=None):
        matmul_qk = tf.matmul(Q, K, transpose_b=True)
        depth = tf.cast(self.d_k, tf.float32)
        logits = matmul_qk / tf.math.sqrt(depth)
        if mask is not None:
            logits += (mask * -1e9)
        attention_weights = tf.nn.softmax(logits, axis=-1)
        output = tf.matmul(attention_weights, V)
        return output

    def split_heads(self, x):
        batch_size = tf.shape(x)[0]
        x = tf.reshape(x, (batch_size, -1, self.num_heads, self.d_k))
        return tf.transpose(x, perm=[0, 2, 1, 3])

    def combine_heads(self, x):
        batch_size = tf.shape(x)[0]
        x = tf.transpose(x, perm=[0, 2, 1, 3])
        return tf.reshape(x, (batch_size, -1, self.d_model))

    # def call(self, Q, K, V, mask=None):

class MultiHeadAttention(tf.keras.layers.Layer):
    def __init__(self, d_model, num_heads):
        super().__init__()
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
        self.num_heads = num_heads
        self.d_model = d_model
        self.d_k = d_model // num_heads
        self.W_q = tf.keras.layers.Dense(d_model)
        self.W_k = tf.keras.layers.Dense(d_model)
        self.W_v = tf.keras.layers.Dense(d_model)
        self.W_o = tf.keras.layers.Dense(d_model)

    def scaled_dot_product_attention(self, Q, K, V, mask=None):
        matmul_qk = tf.matmul(Q, K, transpose_b=True)
        depth = tf.cast(self.d_k, tf.float32)
        logits = matmul_qk / tf.math.sqrt(depth)
        if mask is not None:
            logits += (mask * -1e9)
        attention_weights = tf.nn.softmax(logits, axis=-1)
        output = tf.matmul(attention_weights, V)
        return output

    def split_heads(self, x):
        batch_size = tf.shape(x)[0]
        x = tf.reshape(x, (batch_size, -1, self.num_heads, self.d_k))
        return tf.transpose(x, perm=[0, 2, 1, 3])

    def combine_heads(self, x):
        batch_size = tf.shape(x)[0]
        x = tf.transpose(x, perm=[0, 2, 1, 3])
        return tf.reshape(x, (batch_size, -1, self.d_model))

    def call(self, Q, K, V, mask=None):
        Q = self.split_heads(self.W_q(Q))
        K = self.split_heads(self.W_k(K))
        V = self.split_heads(self.W_v(V))
        attn_output = self.scaled_dot_product_attention(Q, K, V, mask)
        attn_output = self.combine_heads(attn_output)
        output = self.W_o(attn_output)
        return output
{ "setup_code": "\nd_model = 512\nnum_heads = 8\nmodel = MultiHeadAttention(d_model, num_heads)", "test_cases": [ "assert isinstance(model, tf.keras.layers.Layer)", "Q, K, V = tf.random.uniform((5, 10, 512)), tf.random.uniform((5, 10, 512)), tf.random.uniform((5, 10, 512))\noutput = model(Q, K, V)\nassert output.shape == (5, 10, 512)", "mask = tf.zeros((5, 10, 10))\nmask = tf.expand_dims(mask, 1)\nmask = tf.tile(mask, [1, 8, 1, 1])\noutput = model(Q, K, V, mask)\nassert output.shape == (5, 10, 512)" ] }
57
import torch
import torch.nn as nn

class PositionWiseFeedForward(nn.Module):
    def __init__(self, d_model, d_ff):
        super(PositionWiseFeedForward, self).__init__()
        self.fc1 = nn.Linear(d_model, d_ff)
        self.fc2 = nn.Linear(d_ff, d_model)
        self.relu = nn.ReLU()

    # def forward(self, x):
    #     result =

class PositionWiseFeedForward(nn.Module):
    def __init__(self, d_model, d_ff):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_ff)
        self.fc2 = nn.Linear(d_ff, d_model)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(x)))

# model = PositionWiseFeedForward(d_model, d_ff)
{ "setup_code": "\nimport torch.nn as nn\nd_model = 512\nd_ff = 2048\nx = torch.rand(10, d_model)\nmodel = PositionWiseFeedForward(d_model, d_ff)\n", "test_cases": [ "assert model.fc1.in_features == d_model and model.fc1.out_features == d_ff, 'First linear layer configuration error'", "assert model.fc2.in_features == d_ff and model.fc2.out_features == d_model, 'Second linear layer configuration error'", "assert model.forward(x).shape == (10, d_model), 'Forward function output shape error'" ] }
import tensorflow as tf
class PositionWiseFeedForward(tf.keras.layers.Layer):
    def __init__(self, d_model, d_ff):
        super().__init__()
        self.fc1 = tf.keras.layers.Dense(d_ff, activation='relu')
        self.fc2 = tf.keras.layers.Dense(d_model)

    # def call(self, x):

# model =

class PositionWiseFeedForward(tf.keras.layers.Layer):
    def __init__(self, d_model, d_ff):
        super().__init__()
        self.fc1 = tf.keras.layers.Dense(d_ff, activation='relu')
        self.fc2 = tf.keras.layers.Dense(d_model)

    def call(self, x):
        x = self.fc1(x)
        return self.fc2(x)

# model = PositionWiseFeedForward(d_model, d_ff)
{ "setup_code": "import tensorflow as tf\nd_model = 512\nd_ff = 2048\nx = tf.random.uniform((10, d_model))\nmodel = PositionWiseFeedForward(d_model, d_ff)", "test_cases": [ "assert model.fc1.units == d_ff and model.fc1.activation == tf.keras.activations.relu, 'First Dense layer configuration error'", "assert model.fc2.units == d_model, 'Second Dense layer configuration error'\n", "assert model.call(x).shape == (10, d_model), 'Call function output shape error'" ] }
58
import torch
import torch.nn as nn
import math

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, max_seq_length):
        super().__init__()
        # Initialize the positional encoding layer

    def forward(self, x):
        # Apply positional encoding to x
        # result = ...
        pass

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, max_seq_length):
        super().__init__()
        pe = torch.zeros(max_seq_length, d_model)
        position = torch.arange(0, max_seq_length, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * -(math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer('pe', pe.unsqueeze(0))

    def forward(self, x):
        return x + self.pe[:, :x.size(1)]

# result = PositionalEncoding(d_model, max_seq_length)
{ "setup_code": "d_model = 512\nmax_seq_length = 100\nx = torch.randn(32, 100, 512)", "test_cases": [ "pos_encoding = PositionalEncoding(d_model, max_seq_length)\nresult = pos_encoding(x)\nassert result.shape == x.shape, 'Output shape should match input shape'", "expected_result = x + pos_encoding.pe[:, :x.size(1)]\nassert torch.allclose(result, expected_result,atol=1e-6), \"The positional encodings are not added correctly.\"", "assert torch.all(result[:, :, 1::2] != x[:, :, 1::2]), 'Cosine encoding not applied correctly'" ] }
import tensorflow as tf
import math

class PositionalEncoding(tf.keras.layers.Layer):
    def __init__(self, d_model, max_seq_length):
        super().__init__()
        # Initialize the positional encoding layer

    def call(self, x):
        # Apply positional encoding to x
        # result = ...
        pass

class PositionalEncoding(tf.keras.layers.Layer):
    def __init__(self, d_model, max_seq_length):
        super().__init__()
        position = tf.range(0, max_seq_length, dtype=tf.float32)[..., tf.newaxis]
        i = tf.range(0, d_model, 2, dtype=tf.float32)
        div_term = tf.exp(i * -(math.log(10000.0) / d_model))
        pe = tf.concat([tf.sin(position * div_term), tf.cos(position * div_term)], axis=-1)
        pe = pe[tf.newaxis, ...]
        self.pe = tf.cast(pe, tf.float32)

    def call(self, x):
        return x + self.pe[:, :tf.shape(x)[1], :]

# result = PositionalEncoding(d_model, max_seq_length)
{ "setup_code": "d_model = 512\nmax_seq_length = 100\nx = tf.random.uniform((32, 100, 512))", "test_cases": [ "pos_encoding = PositionalEncoding(d_model, max_seq_length)\nresult = pos_encoding(x)\nassert result.shape == x.shape, 'Output shape should match input shape'", "assert not tf.reduce_all(tf.equal(result[:, :, 0::2], x[:, :, 0::2])), 'Sine encoding not applied correctly'", "assert not tf.reduce_all(tf.equal(result[:, :, 1::2], x[:, :, 1::2])), 'Cosine encoding not applied correctly'" ] }
59
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    def __init__(self, input_dim):
        super().__init__()

    def forward(self, x):
        # x.shape (batch_size, seq_length, input_dim)
        # weighted = ...
        # return weighted
        pass

# model = SelfAttention(input_dim)

class SelfAttention(nn.Module):
    def __init__(self, input_dim):
        super().__init__()
        self.input_dim = input_dim
        self.query = nn.Linear(input_dim, input_dim)  # [batch_size, seq_length, input_dim]
        self.key = nn.Linear(input_dim, input_dim)    # [batch_size, seq_length, input_dim]
        self.value = nn.Linear(input_dim, input_dim)
        self.softmax = nn.Softmax(dim=2)

    def forward(self, x):
        # x.shape (batch_size, seq_length, input_dim)
        queries = self.query(x)
        keys = self.key(x)
        values = self.value(x)
        scores = torch.bmm(queries, keys.transpose(1, 2)) / (self.input_dim ** 0.5)
        attention = self.softmax(scores)
        weighted = torch.bmm(attention, values)
        return weighted

# model = SelfAttention(input_dim)
{ "setup_code": "import torch\ninput_dim = 64\nmodel = SelfAttention(input_dim)\nx = torch.rand(10, 20, input_dim)", "test_cases": [ "assert model(x).shape == torch.Size([10, 20, input_dim]), 'Output shape is incorrect'", "assert isinstance(model.query, nn.Linear) and model.query.in_features == input_dim, 'Query layer configuration error'", "assert isinstance(model.key, nn.Linear) and model.key.in_features == input_dim, 'Key layer configuration error'" ] }
import tensorflow as tf
class SelfAttention(tf.keras.layers.Layer):
    def __init__(self, input_dim):
        super().__init__()

    def call(self, x):
        # x.shape (batch_size, seq_length, input_dim)
        # weighted = ...
        # return weighted
        pass

# model = SelfAttention(input_dim)

class SelfAttention(tf.keras.layers.Layer):
    def __init__(self, input_dim):
        super().__init__()
        self.input_dim = input_dim
        self.query = tf.keras.layers.Dense(input_dim)
        self.key = tf.keras.layers.Dense(input_dim)
        self.value = tf.keras.layers.Dense(input_dim)

    def call(self, x):
        # x.shape (batch_size, seq_length, input_dim)
        queries = self.query(x)
        keys = self.key(x)
        values = self.value(x)
        scores = tf.matmul(queries, keys, transpose_b=True) / (self.input_dim ** 0.5)
        attention = tf.nn.softmax(scores, axis=-1)
        weighted = tf.matmul(attention, values)
        return weighted

# model = SelfAttention(input_dim)
{ "setup_code": "import tensorflow as tf\ninput_dim = 64\nmodel = SelfAttention(input_dim)\nx = tf.random.uniform((10, 20, input_dim))", "test_cases": [ "assert model(x).shape == (10, 20, input_dim), 'Output shape is incorrect'", "assert isinstance(model.query, tf.keras.layers.Dense) and model.query.units == input_dim, 'Query layer configuration error'", "assert isinstance(model.key, tf.keras.layers.Dense) and model.key.units == input_dim, 'Key layer configuration error'" ] }
60
import torch
def softmax(x):
    pass

def softmax(x):
    exp_x = torch.exp(x - torch.max(x))
    return exp_x / torch.sum(exp_x, dim=-1, keepdim=True)
{ "setup_code": "x = torch.tensor([1.0, 2.0, 3.0]).float()", "test_cases": [ "assert torch.allclose(softmax(x), torch.tensor([0.0900, 0.2447, 0.6652]), atol=1e-4), 'Test failed: Softmax output is not as expected'", "large_values = torch.tensor([1000.0, 1000.0, 1000.0]).float()\nassert torch.allclose(softmax(large_values), torch.tensor([0.3333, 0.3333, 0.3333]), atol=1e-4), 'Test failed: Softmax fails with large numbers'", "mixed_values = torch.tensor([0.0, -1.0, -3.0]).float()\nassert torch.allclose(softmax(mixed_values), torch.tensor([0.7054, 0.2595, 0.0351]), atol=1e-4), 'Test failed: Softmax fails with negative and zero values'" ] }
import tensorflow as tf
def softmax(x):
    pass

def softmax(x):
    exp_x = tf.exp(x - tf.reduce_max(x))
    return exp_x / tf.reduce_sum(exp_x, axis=-1, keepdims=True)
{ "setup_code": "x = tf.constant([1.0, 2.0, 3.0], dtype=tf.float32)", "test_cases": [ "assert tf.experimental.numpy.allclose(softmax(x), tf.constant([0.09003057, 0.24472848, 0.66524094], dtype=tf.float32), atol=1e-4), 'Test failed: Softmax output is not as expected'", "large_values = tf.constant([1000.0, 1000.0, 1000.0], dtype=tf.float32)\nassert tf.experimental.numpy.allclose(softmax(large_values), tf.constant([0.3333, 0.3333, 0.3333], dtype=tf.float32), atol=1e-4), 'Test failed: Softmax fails with large numbers'", "mixed_values = tf.constant([0.0, -1.0, -3.0], dtype=tf.float32)\nassert tf.experimental.numpy.allclose(softmax(mixed_values), tf.constant([0.70538455,0.25949648,0.03511903], dtype=tf.float32), atol=1e-4), 'Test failed: Softmax fails with negative and zero values'" ] }
61
import torch
import numpy as np

x_values = [i for i in range(11)]
# x_train = ....
y_values = [2*i + 1 for i in x_values]
# y_train = ...

x_values = [i for i in range(11)]
x_train = np.array(x_values, dtype=np.float32)
x_train = x_train.reshape(-1, 1)
x_train = torch.from_numpy(x_train)
y_values = [2*i + 1 for i in x_values]
y_train = np.array(y_values, dtype=np.float32)
y_train = y_train.reshape(-1, 1)
y_train = torch.from_numpy(y_train)
# dataset = (x_train, y_train)
{ "setup_code": "", "test_cases": [ "assert x_train.shape == (11, 1), 'The shape of x_train should be (11, 1)'", "assert y_train.shape == (11, 1), 'The shape of y_train should be (11, 1)'", "assert torch.equal(x_train[5], torch.tensor([5.0])), 'The fifth element of x_train should be tensor([5.0])'" ] }
import tensorflow as tf
import numpy as np

x_values = [i for i in range(11)]
# x_train = ....
y_values = [2*i + 1 for i in x_values]
# y_train = ...

x_values = [i for i in range(11)]
x_train = np.array(x_values, dtype=np.float32)
x_train = x_train.reshape(-1, 1)
x_train = tf.convert_to_tensor(x_train)
y_values = [2*i + 1 for i in x_values]
y_train = np.array(y_values, dtype=np.float32)
y_train = y_train.reshape(-1, 1)
y_train = tf.convert_to_tensor(y_train)
# dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
{ "setup_code": "", "test_cases": [ "assert x_train.shape == (11, 1), 'The shape of x_train should be (11, 1)'", "assert y_train.shape == (11, 1), 'The shape of y_train should be (11, 1)'", "assert tf.reduce_all(tf.equal(x_train[5], tf.constant([5.0]))), 'The fifth element of x_train should be tf.constant([5.0])'" ] }
62
import torch
from torch.autograd import Variable

class LinearRegression(torch.nn.Module):
    def __init__(self, inputSize, outputSize):
        super().__init__()
        # self.linear =

    def forward(self, x):
        pass

# model = LinearRegression(inputSize, outputSize)

class LinearRegression(torch.nn.Module):
    def __init__(self, inputSize, outputSize):
        super().__init__()
        self.linear = torch.nn.Linear(inputSize, outputSize)

    def forward(self, x):
        out = self.linear(x)
        return out

# model = LinearRegression(inputSize, outputSize)
{ "setup_code": "inputSize = 2\noutputSize = 1\nmodel = LinearRegression(inputSize, outputSize)\ninput_tensor = torch.tensor([[1.0, 2.0]])", "test_cases": [ "assert model.linear.in_features == inputSize, 'Incorrect input size'", "assert model.linear.out_features == outputSize, 'Incorrect output size'", "assert model(input_tensor).shape == (1, outputSize), 'Incorrect output shape from forward method'" ] }
import tensorflow as tf
class LinearRegression(tf.keras.Model):
    def __init__(self, inputSize, outputSize):
        super().__init__()
        # self.linear =

    def call(self, x):
        pass

# model = LinearRegression(inputSize, outputSize)

class LinearRegression(tf.keras.Model):
    def __init__(self, inputSize, outputSize):
        super().__init__()
        self.linear = tf.keras.layers.Dense(outputSize, input_shape=(inputSize,))

    def call(self, x):
        return self.linear(x)

# model = LinearRegression(inputSize, outputSize)
{ "setup_code": "inputSize = 2\noutputSize = 1\nmodel = LinearRegression(inputSize, outputSize)\ninput_tensor = tf.constant([[1.0, 2.0]])\noutput_tensor = model(input_tensor)", "test_cases": [ "\noutput_tensor = model(input_tensor)\nassert output_tensor.shape == (1, 1), 'Output shape should be (1, 1)'\nassert model.linear.weights[0].shape == (2, 1), 'Weight shape should be (2, 1)'\n", "assert model.linear.units == outputSize, 'Incorrect output size'", "assert model(input_tensor).shape == (1, outputSize), 'Incorrect output shape from call method'" ] }
63
import torch
from torch.autograd import Variable

class linearRegression(torch.nn.Module):
    def __init__(self, inputSize, outputSize):
        super().__init__()
        self.linear = torch.nn.Linear(inputSize, outputSize)

    def forward(self, x):
        out = self.linear(x)
        return out

inputDim = 1   # takes variable 'x'
outputDim = 1  # takes variable 'y'
model = linearRegression(inputDim, outputDim)
# learningRate = ...
# optimizer = ...

learningRate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learningRate)
{ "setup_code": "", "test_cases": [ "assert isinstance(optimizer, torch.optim.SGD), 'Optimizer should be an instance of torch.optim.SGD'", "assert optimizer.param_groups[0]['lr'] == 0.01, 'Learning rate should be 0.01'", "assert len(optimizer.param_groups[0]['params']) == 2, 'Optimizer should have parameters for both weight and bias'" ] }
import tensorflow as tf
inputDim = 1
outputDim = 1
model = tf.keras.Sequential([
    tf.keras.layers.Dense(outputDim, input_shape=(inputDim,))
])
# learningRate = ...
# optimizer = ...

learningRate = 0.01
optimizer = tf.keras.optimizers.SGD(learning_rate=learningRate)
{ "setup_code": "inputDim = 1\noutputDim = 1", "test_cases": [ "assert isinstance(optimizer, tf.keras.optimizers.SGD), 'Optimizer should be an instance of tf.keras.optimizers.SGD'", "assert optimizer.learning_rate == 0.01, 'Learning rate should be 0.01'", "assert hasattr(model.layers[0], 'kernel'), 'Model should have at least one layer with weights'" ] }
64
import torch
import numpy as np
from torch.autograd import Variable

x_values = [i for i in range(11)]
x_train = np.array(x_values, dtype=np.float32)
x_train = x_train.reshape(-1, 1)
x_train = torch.from_numpy(x_train)
y_values = [2*i + 1 for i in x_values]
y_train = np.array(y_values, dtype=np.float32)
y_train = y_train.reshape(-1, 1)
y_train = torch.from_numpy(y_train)

class linearRegression(torch.nn.Module):
    def __init__(self, inputSize, outputSize):
        super().__init__()
        self.linear = torch.nn.Linear(inputSize, outputSize)

    def forward(self, x):
        out = self.linear(x)
        return out

inputDim = 1
outputDim = 1
model = linearRegression(inputDim, outputDim)
learningRate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learningRate)
epochs = 10
losses = []
for epoch in range(epochs):
    pass
# losses =

criterion = torch.nn.MSELoss()
for epoch in range(epochs):
    inputs = Variable(x_train)
    labels = Variable(y_train)
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
# losses = losses
{ "setup_code": "import torch\nimport numpy as np\nfrom torch.autograd import Variable\nclass linearRegression(torch.nn.Module):\n def __init__(self, inputSize, outputSize):\n super().__init__()\n self.linear = torch.nn.Linear(inputSize, outputSize)\n\n def forward(self, x):\n out = self.linear(x)\n return out\ninputDim = 1\noutputDim = 1\nmodel = linearRegression(inputDim, outputDim)\nlearningRate = 0.01\noptimizer = torch.optim.SGD(model.parameters(), lr=learningRate)\ncriterion = torch.nn.MSELoss()", "test_cases": [ "x_values = [i for i in range(11)]\nx_train = np.array(x_values, dtype=np.float32)\nx_train = x_train.reshape(-1, 1)\nx_train = torch.from_numpy(x_train)\ny_values = [2*i + 1 for i in x_values]\ny_train = np.array(y_values, dtype=np.float32)\ny_train = y_train.reshape(-1, 1)\ny_train = torch.from_numpy(y_train)\nepochs = 10\nlosses = []\nfor epoch in range(epochs):\n inputs = Variable(x_train)\n labels = Variable(y_train)\n optimizer.zero_grad()\n outputs = model(inputs)\n loss = criterion(outputs, labels)\n loss.backward()\n optimizer.step()\n losses.append(loss.item())\nassert len(losses) == 10, 'The losses list should contain 10 elements for 10 epochs'", "assert losses[0] > losses[-1], 'The loss should decrease over epochs'", "assert isinstance(losses[0], float), 'Losses should be recorded as float values'" ] }
import tensorflow as tf
import numpy as np

x_values = [i for i in range(11)]
x_train = np.array(x_values, dtype=np.float32)
x_train = x_train.reshape(-1, 1)
y_values = [2*i + 1 for i in x_values]
y_train = np.array(y_values, dtype=np.float32)
y_train = y_train.reshape(-1, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1])
])
model.compile(optimizer='sgd', loss='mean_squared_error')

epochs = 10
losses = []
for epoch in range(epochs):
    pass
# losses =

for epoch in range(epochs):
    history = model.fit(x_train, y_train, epochs=1, verbose=0)
    losses.append(history.history['loss'][0])
# losses = losses
{ "setup_code": "import tensorflow as tf\nimport numpy as np\nmodel = tf.keras.Sequential([\n tf.keras.layers.Dense(units=1, input_shape=[1])\n])\nmodel.compile(optimizer='sgd', loss='mean_squared_error')", "test_cases": [ "x_values = [i for i in range(11)]\nx_train = np.array(x_values, dtype=np.float32)\nx_train = x_train.reshape(-1, 1)\ny_values = [2*i + 1 for i in x_values]\ny_train = np.array(y_values, dtype=np.float32)\ny_train = y_train.reshape(-1, 1)\nepochs = 10\nlosses = []\nfor epoch in range(epochs):\n history = model.fit(x_train, y_train, epochs=1, verbose=0)\n losses.append(history.history['loss'][0])\nassert len(losses) == 10, 'The losses list should contain 10 elements for 10 epochs'", "assert losses[0] > losses[-1], 'The loss should decrease over epochs'", "assert isinstance(losses[0], float), 'Losses should be recorded as float values'" ] }
65
import torch
import torch.nn as nn

x_values = [i for i in range(11)]
y_values = [1 if i > 5 else 0 for i in x_values]

x_train = torch.tensor(x_values, dtype=torch.float32).view(-1, 1)
y_train = torch.tensor(y_values, dtype=torch.float32).view(-1, 1)
{ "setup_code": "", "test_cases": [ "assert x_train.shape == (11, 1), 'The shape of x_train should be (11, 1)'", "assert y_train.shape == (11, 1), 'The shape of y_train should be (11, 1)'", "assert y_train.sum() == 5, 'There should be five positive examples'" ] }
import tensorflow as tf
x_values = [i for i in range(11)]
y_values = [1 if i > 5 else 0 for i in x_values]

x_train = tf.constant(x_values, dtype=tf.float32, shape=[11, 1])
y_train = tf.constant(y_values, dtype=tf.float32, shape=[11, 1])
{ "setup_code": "", "test_cases": [ "assert x_train.shape == (11, 1), 'The shape of x_train should be (11, 1)'", "assert y_train.shape == (11, 1), 'The shape of y_train should be (11, 1)'", "assert tf.reduce_sum(y_train) == 5, 'There should be five positive examples'" ] }
66
import torch
import torch.nn as nn

class LogisticRegression(nn.Module):
    def __init__(self, inputSize, outputSize):
        super().__init__()
        # self.linear = ...
        # self.sigmoid = ...

class LogisticRegression(nn.Module):
    def __init__(self, inputSize, outputSize):
        super().__init__()
        self.linear = nn.Linear(inputSize, outputSize)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        return self.sigmoid(self.linear(x))
{ "setup_code": "model = LogisticRegression(1, 1)\ninput_tensor = torch.tensor([[1.0]])", "test_cases": [ "assert isinstance(model.linear, nn.Linear), 'The model should contain a linear layer'", "assert isinstance(model.sigmoid, nn.Sigmoid), 'The model should contain a sigmoid activation'", "assert model(input_tensor).shape == (1, 1), 'Output shape should be (1, 1)'" ] }
import tensorflow as tf
class LogisticRegression(tf.keras.Model):
    def __init__(self, inputSize, outputSize):
        super().__init__()
        # self.linear = ...
        # self.activation = ...

class LogisticRegression(tf.keras.Model):
    def __init__(self, inputSize, outputSize):
        super().__init__()
        self.linear = tf.keras.layers.Dense(outputSize, activation='sigmoid', input_shape=(inputSize,))

    def call(self, x):
        return self.linear(x)
{ "setup_code": "model = LogisticRegression(1, 1)\ninput_tensor = tf.constant([[1.0]])", "test_cases": [ "assert model(input_tensor).shape == (1, 1), 'Output shape should be (1, 1)'", "assert isinstance(model.linear, tf.keras.layers.Dense), 'The model should contain a Dense layer'" ] }
67
import torch
from torch.optim import Adam
import torch.nn as nn

class LogisticRegression(nn.Module):
    def __init__(self, inputSize, outputSize):
        super().__init__()
        self.linear = nn.Linear(inputSize, outputSize)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        return self.sigmoid(self.linear(x))

model = LogisticRegression(1, 1)
# optimizer = ...
optimizer = Adam(model.parameters(), lr=0.01)
{ "setup_code": "model = LogisticRegression(1, 1)", "test_cases": [ "assert isinstance(optimizer, Adam), 'Optimizer should be an instance of torch.optim.Adam'", "assert optimizer.param_groups[0]['lr'] == 0.01, 'Learning rate should be 0.01'" ] }
import tensorflow as tf
model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(1,))])
# optimizer = ...
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
{ "setup_code": "model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(1,))])", "test_cases": [ "assert isinstance(optimizer, tf.keras.optimizers.Adam), 'Optimizer should be an instance of tf.keras.optimizers.Adam'", "assert optimizer.learning_rate == 0.01, 'Learning rate should be 0.01'" ] }
68
import torch
import torch.nn as nn
from torch.optim import Adam

x_train = torch.tensor([[1], [2], [3], [4], [5]], dtype=torch.float32)
y_train = torch.tensor([[0], [0], [1], [1], [1]], dtype=torch.float32)

class LogisticRegression(nn.Module):
    def __init__(self, inputSize, outputSize):
        super().__init__()
        self.linear = nn.Linear(inputSize, outputSize)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        return self.sigmoid(self.linear(x))

model = LogisticRegression(1, 1)
optimizer = Adam(model.parameters(), lr=0.01)
epochs = 10
losses = []
# training loop here

criterion = nn.BCELoss()
for epoch in range(epochs):
    optimizer.zero_grad()
    outputs = model(x_train)
    loss = criterion(outputs, y_train)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
{ "setup_code": "model = LogisticRegression(1, 1)", "test_cases": [ "assert len(losses) == 10, 'The losses list should contain 10 elements for 10 epochs'", "assert losses[0] > losses[-1], 'The loss should decrease over epochs'", "assert isinstance(optimizer, Adam), 'Optimizer should be an instance of torch.optim.Adam'" ] }
import tensorflow as tf
x_train = tf.constant([[1], [2], [3], [4], [5]], dtype=tf.float32)
y_train = tf.constant([[0], [0], [1], [1], [1]], dtype=tf.float32)
model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(1,))])
model.compile(optimizer='Adam', loss='binary_crossentropy')
epochs = 10
losses = []
# training loop here

for epoch in range(epochs):
    history = model.fit(x_train, y_train, epochs=1, verbose=0)
    losses.append(history.history['loss'][0])
{ "setup_code": "", "test_cases": [ "assert len(losses) == 10, 'The losses list should contain 10 elements for 10 epochs'", "assert losses[0] > losses[-1], 'The loss should decrease over epochs'", "assert type(model.optimizer) == tf.keras.optimizers.Adam, 'Optimizer should be an instance of tf.keras.optimizers.Adam'" ] }
69
import torch
values = torch.tensor([0.1, -0.5, 0.7], dtype=torch.float32)
# atanh_values =
atanh_values = torch.atanh(values)
{ "setup_code": "", "test_cases": [ "assert torch.allclose(atanh_values, torch.tensor([0.1003, -0.5493, 0.8673]), atol=1e-4)", "atanh_values = torch.atanh(torch.tensor([-0.3, 0.2, 0.9], dtype=torch.float32))\nassert torch.allclose(atanh_values, torch.tensor([-0.3095, 0.2027, 1.4722]), atol=1e-4)", "atanh_values = torch.atanh(torch.tensor([0.0, 0.99, -0.99], dtype=torch.float32))\nassert torch.allclose(atanh_values, torch.tensor([0.0, 2.6467, -2.6467]), atol=1e-4)" ] }
import tensorflow as tf
import numpy as np

values = tf.constant([0.1, -0.5, 0.7], dtype=tf.float32)
# atanh_values =
atanh_values = tf.math.atanh(values)
{ "setup_code": "", "test_cases": [ "assert np.allclose(atanh_values.numpy(), [0.1003, -0.5493, 0.8673], atol=1e-4)", "atanh_values = tf.math.atanh(tf.constant([-0.3, 0.2, 0.9], dtype=tf.float32))\nassert np.allclose(atanh_values.numpy(), [-0.3095, 0.2027, 1.4722], atol=1e-4)", "atanh_values = tf.math.atanh(tf.constant([0.0, 0.99, -0.99], dtype=tf.float32))\nassert np.allclose(atanh_values.numpy(), [0.0, 2.6467, -2.6467], atol=1e-4)" ] }
70
import torch
import torch.nn as nn
import torch.nn.functional as F

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()

    def forward(self, input, hidden):
        pass

    def initHidden(self):
        pass

# n_hidden = 128
# rnn = RNN(input_size, n_hidden, output_size)

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(input_size, hidden_size)
        self.h2h = nn.Linear(hidden_size, hidden_size)
        self.h2o = nn.Linear(hidden_size, output_size)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, input, hidden):
        hidden = torch.tanh(self.i2h(input) + self.h2h(hidden))  # torch.tanh instead of the deprecated F.tanh
        output = self.h2o(hidden)
        output = self.softmax(output)
        return output, hidden

    def initHidden(self):
        return torch.zeros(1, self.hidden_size)

# n_hidden = 128
# rnn = RNN(input_size, n_hidden, output_size)
{ "setup_code": "input_size = 10\nn_hidden = 128\noutput_size = 2\nrnn = RNN(input_size, n_hidden, output_size)\ninput = torch.randn(1, input_size)\nhidden = torch.zeros(1, n_hidden)", "test_cases": [ "output, hidden = rnn(input, hidden)\nassert output.shape == (1, output_size), 'Output shape is incorrect'", "assert hidden.shape == (1, n_hidden), 'Hidden state shape is incorrect'", "assert isinstance(rnn.i2h, nn.Linear) and isinstance(rnn.h2h, nn.Linear) and isinstance(rnn.h2o, nn.Linear), 'Model layers are not correctly implemented'" ] }
import tensorflow as tf
class RNN(tf.keras.Model):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()

    def call(self, inputs, hidden):
        pass

    def initHidden(self):
        pass

# n_hidden = 128
# rnn = RNN(input_size, n_hidden, output_size)

class RNN(tf.keras.Model):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.hidden_size = hidden_size
        # No built-in activations here: tanh is applied once, after the sum,
        # mirroring the PyTorch solution (the original applied tanh twice).
        self.i2h = tf.keras.layers.Dense(hidden_size)
        self.h2h = tf.keras.layers.Dense(hidden_size)
        self.h2o = tf.keras.layers.Dense(output_size)
        self.softmax = tf.keras.layers.Activation('softmax')

    def call(self, inputs, hidden):
        hidden = self.i2h(inputs) + self.h2h(hidden)
        hidden = tf.nn.tanh(hidden)
        output = self.h2o(hidden)
        output = self.softmax(output)
        return output, hidden

    def initHidden(self):
        return tf.zeros((1, self.hidden_size))

# n_hidden = 128
# rnn = RNN(input_size, n_hidden, output_size)
{ "setup_code": "input_size = 10\nn_hidden = 128\noutput_size = 2\nrnn = RNN(input_size, n_hidden, output_size)\ninputs = tf.random.normal([1, input_size])\nhidden = tf.zeros([1, n_hidden])", "test_cases": [ "output, hidden = rnn(inputs, hidden)\nassert output.shape == (1, output_size), 'Output shape is incorrect'", "assert hidden.shape == (1, n_hidden), 'Hidden state shape is incorrect'", "assert isinstance(rnn.i2h, tf.keras.layers.Dense) and isinstance(rnn.h2h, tf.keras.layers.Dense) and isinstance(rnn.h2o, tf.keras.layers.Dense), 'Model layers are not correctly implemented'" ] }
71
import torch
def accuracy(y_pred, y_true):
    pass

def accuracy(y_pred, y_true):
    correct = torch.sum(y_pred == y_true)
    total = y_true.size(0)
    return correct.float() / total
{ "setup_code": "y_pred = torch.tensor([0, 2, 1, 3])\ny_true = torch.tensor([0, 1, 2, 3])", "test_cases": [ "assert accuracy(y_pred, y_true) == 0.5, 'Test failed: Accuracy should be 0.5'", "assert accuracy(torch.tensor([0, 1, 2, 3]), torch.tensor([0, 1, 2, 3])) == 1, 'Test failed: Accuracy should be 1 for all correct predictions'", "assert accuracy(torch.tensor([0, 0, 0, 0]), torch.tensor([0, 1, 2, 3])) == 0.25, 'Test failed: Accuracy should be 0.25'" ] }
import tensorflow as tf
def accuracy(y_pred, y_true):
    pass

def accuracy(y_pred, y_true):
    correct = tf.reduce_sum(tf.cast(tf.equal(y_pred, y_true), dtype=tf.float32))
    total = tf.cast(tf.size(y_true), dtype=tf.float32)
    return correct / total
{ "setup_code": "y_pred = tf.constant([0, 2, 1, 3])\ny_true = tf.constant([0, 1, 2, 3])", "test_cases": [ "assert accuracy(y_pred, y_true).numpy() == 0.5, 'Test failed: Accuracy should be 0.5'", "assert accuracy(tf.constant([0, 1, 2, 3]), tf.constant([0, 1, 2, 3])).numpy() == 1, 'Test failed: Accuracy should be 1 for all correct predictions'", "assert accuracy(tf.constant([0, 0, 0, 0]), tf.constant([0, 1, 2, 3])).numpy() == 0.25, 'Test failed: Accuracy should be 0.25'" ] }
72
import torch
def precision(y_pred, y_true):
    pass

def precision(y_pred, y_true):
    true_positive = torch.sum((y_pred == 1) & (y_true == 1))
    predicted_positive = torch.sum(y_pred == 1)
    return true_positive.float() / predicted_positive
{ "setup_code": "y_pred = torch.tensor([1, 0, 1, 0])\ny_true = torch.tensor([1, 0, 0, 1])", "test_cases": [ "assert precision(y_pred, y_true) == 0.5, 'Test failed: Precision should be 0.5'", "assert precision(torch.tensor([1, 1, 1, 1]), torch.tensor([1, 1, 1, 1])) == 1, 'Test failed: Precision should be 1'", "assert precision(torch.tensor([1, 1, 1, 1]), torch.tensor([0, 0, 0, 0])) == 0, 'Test failed: Precision should be 0'" ] }
import tensorflow as tf
def precision(y_pred, y_true):
    pass

def precision(y_pred, y_true):
    true_positive = tf.reduce_sum(tf.cast(tf.logical_and(y_pred == 1, y_true == 1), dtype=tf.float32))
    predicted_positive = tf.reduce_sum(tf.cast(y_pred == 1, dtype=tf.float32))
    return true_positive / predicted_positive
{ "setup_code": "y_pred = tf.constant([1, 0, 1, 0])\ny_true = tf.constant([1, 0, 0, 1])", "test_cases": [ "assert precision(y_pred, y_true).numpy() == 0.5, 'Test failed: Precision should be 0.5'", "assert precision(tf.constant([1, 1, 1, 1]), tf.constant([1, 1, 1, 1])).numpy() == 1, 'Test failed: Precision should be 1'", "assert precision(tf.constant([1, 1, 1, 1]), tf.constant([0, 0, 0, 0])).numpy() == 0, 'Test failed: Precision should be 0'" ] }
73
import torch
def recall(y_pred, y_true):
    pass

def recall(y_pred, y_true):
    true_positive = torch.sum((y_pred == 1) & (y_true == 1))
    actual_positive = torch.sum(y_true == 1)
    return true_positive.float() / actual_positive
{ "setup_code": "y_pred = torch.tensor([1, 0, 1, 0])\ny_true = torch.tensor([1, 0, 0, 1])", "test_cases": [ "assert recall(y_pred, y_true) == 0.5, 'Test failed: Recall should be 0.5'", "assert recall(torch.tensor([1, 1, 1, 1]), torch.tensor([1, 1, 1, 1])) == 1, 'Test failed: Recall should be 1'", "assert recall(torch.tensor([0, 0, 0, 0]), torch.tensor([1, 1, 1, 1])) == 0, 'Test failed: Recall should be 0'" ] }
import tensorflow as tf
def recall(y_pred, y_true):
    pass

def recall(y_pred, y_true):
    true_positive = tf.reduce_sum(tf.cast(tf.logical_and(y_pred == 1, y_true == 1), dtype=tf.float32))
    actual_positive = tf.reduce_sum(tf.cast(y_true == 1, dtype=tf.float32))
    return true_positive / actual_positive
{ "setup_code": "y_pred = tf.constant([1, 0, 1, 0])\ny_true = tf.constant([1, 0, 0, 1])", "test_cases": [ "assert recall(y_pred, y_true).numpy() == 0.5, 'Test failed: Recall should be 0.5'", "assert recall(tf.constant([1, 1, 1, 1]), tf.constant([1, 1, 1, 1])).numpy() == 1, 'Test failed: Recall should be 1'", "assert recall(tf.constant([0, 0, 0, 0]), tf.constant([1, 1, 1, 1])).numpy() == 0, 'Test failed: Recall should be 0'" ] }
74
import torch
def f1_score(y_pred, y_true):
    pass

def f1_score(y_pred, y_true):
    true_positive = torch.sum((y_pred == 1) & (y_true == 1))
    predicted_positive = torch.sum(y_pred == 1)
    actual_positive = torch.sum(y_true == 1)
    precision = true_positive.float() / predicted_positive if predicted_positive else 0
    recall = true_positive.float() / actual_positive if actual_positive else 0
    return 2 * (precision * recall) / (precision + recall) if (precision + recall) else 0
{ "setup_code": "y_pred = torch.tensor([1, 0, 1, 0])\ny_true = torch.tensor([1, 1, 1, 0])", "test_cases": [ "assert torch.isclose(f1_score(y_pred, y_true), torch.tensor(0.8), atol=1e-6), 'Test failed: F1 score calculation is incorrect'", "assert f1_score(torch.tensor([0, 1, 0, 0, 1, 0]), torch.tensor([0, 1, 1, 0, 0, 1])) == 0.4, 'Test failed: F1 score should be 1 for perfect precision and recall'", "assert f1_score(torch.tensor([0, 0, 0, 0]), torch.tensor([1, 1, 1, 1])) == 0, 'Test failed: F1 score should be 0 when there are no true positives'" ] }
import tensorflow as tf
def f1_score(y_pred, y_true):
    pass

def f1_score(y_pred, y_true):
    true_positive = tf.reduce_sum(tf.cast(tf.logical_and(y_pred == 1, y_true == 1), dtype=tf.float32))
    predicted_positive = tf.reduce_sum(tf.cast(y_pred == 1, dtype=tf.float32))
    actual_positive = tf.reduce_sum(tf.cast(y_true == 1, dtype=tf.float32))
    precision = true_positive / predicted_positive if predicted_positive != 0 else 0
    recall = true_positive / actual_positive if actual_positive != 0 else 0
    return 2 * (precision * recall) / (precision + recall) if (precision + recall) != 0 else 0
{ "setup_code": "y_pred = tf.constant([1, 0, 1, 0])\ny_true = tf.constant([1, 1, 1, 0])", "test_cases": [ "assert tf.experimental.numpy.isclose(f1_score(y_pred, y_true), 0.8, atol=1e-6).numpy(), 'Test failed: F1 score calculation is incorrect'", "assert tf.experimental.numpy.isclose(f1_score(tf.constant([0, 1, 0, 0, 1, 0]), tf.constant([0, 1, 1, 0, 0, 1])), 0.4, atol=1e-6), 'Test failed: F1 score should be 1 for perfect precision and recall'", "assert f1_score(tf.constant([0, 0, 0, 0]), tf.constant([1, 1, 1, 1])) == 0, 'Test failed: F1 score should be 0 when there are no true positives'" ] }
75
import torch
tensor = torch.rand(4, 4, 3, 1)
# tensor_squeezed =
tensor_squeezed = torch.squeeze(tensor)
{ "setup_code": "import torch\ntensor = torch.rand(4, 4, 3, 1)", "test_cases": [ "assert tensor_squeezed.shape == (4, 4, 3), 'Shape after squeeze should be (4, 4, 3)'", "assert tensor_squeezed.dim() == 3, 'Dimension after squeeze should be 3'", "assert not any(d == 1 for d in tensor_squeezed.shape), 'No dimension should be of size 1'" ] }
import tensorflow as tf
tensor = tf.random.uniform((4, 4, 3, 1))
# tensor_squeezed =
tensor_squeezed = tf.squeeze(tensor)
{ "setup_code": "import tensorflow as tf\ntensor = tf.random.uniform((4, 4, 3, 1))", "test_cases": [ "assert tensor_squeezed.shape == (4, 4, 3), 'Shape after squeeze should be (4, 4, 3)'", "assert len(tensor_squeezed.shape) == 3, 'Dimension after squeeze should be 3'", "assert all(dim != 1 for dim in tensor_squeezed.shape), 'No dimension should be of size 1'" ] }
76
import torch
matrix = torch.zeros(4, 4)
# Fill the matrix according to the conditions

matrix.fill_diagonal_(1)
upper_indices = torch.triu_indices(row=4, col=4, offset=1)
matrix[upper_indices[0], upper_indices[1]] = 2
{ "setup_code": "", "test_cases": [ "assert torch.all(matrix.diagonal() == torch.ones(4)), 'Diagonal elements should be 1'", "assert torch.all(matrix.tril(diagonal=-1) == 0), 'Elements below the diagonal should be 0'", "assert torch.all(matrix.triu(diagonal=1) == torch.tensor([[0, 2, 2, 2], [0, 0, 2, 2], [0, 0, 0, 2], [0, 0, 0, 0]])), 'Elements above the diagonal should be 2'" ] }
import tensorflow as tf
matrix = tf.zeros((4, 4), dtype=tf.float32)
# Fill the matrix according to the conditions

diag_values = tf.ones((4,), dtype=tf.float32)
matrix = tf.linalg.set_diag(matrix, diag_values)
upper_mask = tf.linalg.band_part(tf.ones((4, 4), dtype=tf.float32), 0, -1) - tf.linalg.band_part(tf.ones((4, 4), dtype=tf.float32), 0, 0)
matrix += 2 * upper_mask
{ "setup_code": "", "test_cases": [ "assert tf.reduce_all(tf.linalg.diag_part(matrix) == 1), 'Diagonal elements should be 1'", "assert tf.reduce_all(matrix == tf.constant([[1, 2, 2, 2], [0, 1, 2, 2], [0, 0, 1, 2], [0, 0, 0, 1]], dtype=tf.float32)), \"Incorrect matrix values\"" ] }
77
import torch
tensor = torch.rand(3, 2)
# transposed_tensor =
transposed_tensor = tensor.t()
{ "setup_code": "", "test_cases": [ "assert transposed_tensor.shape == (2, 3), 'Transposed tensor shape should be 2x3'", "assert torch.equal(transposed_tensor, tensor.transpose(0, 1)), 'Transposed tensor should match the manually transposed tensor'" ] }
import tensorflow as tf
tensor = tf.random.uniform((3, 2))
# transposed_tensor =
transposed_tensor = tf.transpose(tensor)
{ "setup_code": "", "test_cases": [ "assert transposed_tensor.shape == (2, 3), 'Transposed tensor shape should be 2x3'", "assert tf.reduce_all(tf.equal(transposed_tensor, tf.transpose(tensor))), 'Transposed tensor should match the manually transposed tensor'" ] }
78
import torch
tensor_a = torch.rand(3, 2)
tensor_b = torch.rand(3, 3)
# concatenated_tensor =
concatenated_tensor = torch.cat((tensor_a, tensor_b), dim=1)
{ "setup_code": "", "test_cases": [ "assert concatenated_tensor.shape == (3, 5), 'Concatenated tensor shape should be 3x5'", "assert torch.all(concatenated_tensor[:, :2] == tensor_a), 'First part of concatenated tensor should match tensor_a'", "assert torch.all(concatenated_tensor[:, 2:] == tensor_b), 'Second part of concatenated tensor should match tensor_b'" ] }
import tensorflow as tf
tensor_a = tf.random.uniform((3, 2))
tensor_b = tf.random.uniform((3, 3))
# concatenated_tensor =
concatenated_tensor = tf.concat([tensor_a, tensor_b], axis=1)
{ "setup_code": "", "test_cases": [ "assert concatenated_tensor.shape == (3, 5), 'Concatenated tensor shape should be 3x5'", "assert tf.reduce_all(tf.equal(concatenated_tensor[:, :2], tensor_a)), 'First part of concatenated tensor should match tensor_a'", "assert tf.reduce_all(tf.equal(concatenated_tensor[:, 2:], tensor_b)), 'Second part of concatenated tensor should match tensor_b'" ] }
79
import torch
tensor = torch.rand(4, 6)
# tensor_a, tensor_b =
tensor_a, tensor_b = torch.split(tensor, 3, dim=1)
{ "setup_code": "", "test_cases": [ "assert tensor_a.shape == (4, 3) and tensor_b.shape == (4, 3), 'Each split tensor should have shape 4x3'", "assert torch.all(torch.cat((tensor_a, tensor_b), dim=1) == tensor), 'Concatenating the split tensors should recreate the original tensor'" ] }
import tensorflow as tf
tensor = tf.random.uniform((4, 6))
# tensor_a, tensor_b =
tensor_a, tensor_b = tf.split(tensor, num_or_size_splits=2, axis=1)
{ "setup_code": "", "test_cases": [ "assert tensor_a.shape == (4, 3) and tensor_b.shape == (4, 3), 'Each split tensor should have shape 4x3'", "assert tf.reduce_all(tf.concat([tensor_a, tensor_b], axis=1) == tensor), 'Concatenating the split tensors should recreate the original tensor'" ] }
80
import torch
tensor_a = torch.rand(3, 4)
tensor_b = torch.rand(4, 5)
# result_tensor =
result_tensor = torch.matmul(tensor_a, tensor_b)
{ "setup_code": "", "test_cases": [ "assert result_tensor.shape == (3, 5), 'The result of matrix multiplication should have shape 3x5'", "assert torch.allclose(result_tensor, tensor_a @ tensor_b), 'Result tensor should match the output of direct @ operation'" ] }
import tensorflow as tf
tensor_a = tf.random.uniform((3, 4))
tensor_b = tf.random.uniform((4, 5))
# result_tensor =
result_tensor = tf.matmul(tensor_a, tensor_b)
{ "setup_code": "", "test_cases": [ "assert result_tensor.shape == (3, 5), 'The result of matrix multiplication should have shape 3x5'", "assert tf.reduce_all(tf.equal(result_tensor, tf.matmul(tensor_a, tensor_b))), 'Result tensor should match the output of tf.matmul'" ] }
81
import torch
tensor_a = torch.rand(3, 3)
tensor_b = torch.rand(3, 3)
# result_tensor =
result_tensor = tensor_a + tensor_b
{ "setup_code": "import torch\ntensor_a = torch.rand(3, 3)\ntensor_b = torch.rand(3, 3)", "test_cases": [ "assert result_tensor.shape == (3, 3), 'Result tensor should have shape 3x3'" ] }
import tensorflow as tf
tensor_a = tf.random.uniform((3, 3))
tensor_b = tf.random.uniform((3, 3))
# result_tensor =
result_tensor = tensor_a + tensor_b
{ "setup_code": "import tensorflow as tf\ntensor_a = tf.random.uniform((3, 3))\ntensor_b = tf.random.uniform((3, 3))", "test_cases": [ "assert result_tensor.shape == (3, 3), 'Result tensor should have shape 3x3'" ] }
82
import torch
tensor = torch.rand(4, 4)
# mean_tensor =
mean_tensor = tensor.mean(dim=0)
{ "setup_code": "", "test_cases": [ "assert mean_tensor.shape == (4,), 'Mean tensor should have reduced shape'" ] }
import tensorflow as tf
tensor = tf.random.uniform((4, 4))
# mean_tensor =
mean_tensor = tf.reduce_mean(tensor, axis=0)
{ "setup_code": "", "test_cases": [ "assert mean_tensor.shape == (4,), 'Mean tensor should have reduced shape'" ] }
83
import torch
# linspace_tensor =
linspace_tensor = torch.linspace(1, 10, 10)
{ "setup_code": "import torch", "test_cases": [ "assert linspace_tensor.size(0) == 10, 'Tensor should contain 10 elements'", "assert linspace_tensor[0] == 1 and linspace_tensor[-1] == 10, 'Tensor should start at 1 and end at 10'", "assert torch.allclose(linspace_tensor[1] - linspace_tensor[0], torch.ones(1) * (linspace_tensor[2] - linspace_tensor[1])), 'Elements should be evenly spaced'" ] }
import tensorflow as tf
# linspace_tensor =
linspace_tensor = tf.linspace(1.0, 10.0, 10)
{ "setup_code": "import tensorflow as tf", "test_cases": [ "assert tf.size(linspace_tensor) == 10, 'Tensor should contain 10 elements'", "assert linspace_tensor[0] == 1 and linspace_tensor[-1] == 10, 'Tensor should start at 1 and end at 10'", "assert tf.reduce_all(tf.equal(linspace_tensor[1] - linspace_tensor[0], linspace_tensor[2] - linspace_tensor[1])), 'Elements should be evenly spaced'" ] }
84
import torch
tensor = torch.rand(3, 4)
# tensors_unbound =
tensors_unbound = torch.unbind(tensor, dim=0)
{ "setup_code": "import torch\ntensor = torch.rand(3, 4)", "test_cases": [ "assert len(tensors_unbound) == 3, 'There should be three unbound tensors'", "all(tensor.shape == (4,) for tensor in tensors_unbound), 'Each unbound tensor should have shape (4,)'" ] }
import tensorflow as tf
tensor = tf.random.uniform((3, 4))
# tensors_unbound =
tensors_unbound = tf.unstack(tensor, axis=0)
{ "setup_code": "import tensorflow as tf\ntensor = tf.random.uniform((3, 4))", "test_cases": [ "assert len(tensors_unbound) == 3, 'There should be three unbound tensors'", "all(tensor.shape == (4,) for tensor in tensors_unbound), 'Each unbound tensor should have shape (4,)'" ] }
85
import torch
prob_tensor = torch.tensor([[0.3, 0.6, 0.9], [0.1, 0.4, 0.7]])
# bernoulli_tensor =
bernoulli_tensor = torch.bernoulli(prob_tensor)
{ "setup_code": "", "test_cases": [ "assert bernoulli_tensor.shape == (2, 3), 'The shape of the Bernoulli tensor should be (2, 3)'", "assert torch.all((bernoulli_tensor == 0) | (bernoulli_tensor == 1)), 'All elements in the Bernoulli tensor should be 0 or 1'", "assert torch.all((bernoulli_tensor.sum(dim=1) >= 0) & (bernoulli_tensor.sum(dim=1) <= 3)), 'Sum of each row should be between 0 and 3 inclusive, representing the number of successes'" ] }
import tensorflow as tf
prob_tensor = tf.constant([[0.3, 0.6, 0.9], [0.1, 0.4, 0.7]])
# bernoulli_tensor =
bernoulli_tensor = tf.cast(tf.random.uniform((2, 3)) < prob_tensor, tf.int32)
{ "setup_code": "", "test_cases": [ "assert bernoulli_tensor.shape == (2, 3), 'The shape of the Bernoulli tensor should be (2, 3)'", "assert tf.reduce_all((bernoulli_tensor == 0) | (bernoulli_tensor == 1)), 'All elements in the Bernoulli tensor should be 0 or 1'", "assert tf.reduce_all((tf.reduce_sum(bernoulli_tensor, axis=1) >= 0) & (tf.reduce_sum(bernoulli_tensor, axis=1) <= 3)), 'Sum of each row should be between 0 and 3 inclusive, representing the number of successes'" ] }
86
import torch
tensor = torch.tensor([[1.0, 3.0, 5.0, 7.0, 9.0], [2.0, 4.0, 6.0, 8.0, 10.0]])
# top_values, top_indices =
top_values, top_indices = torch.topk(tensor, 3, dim=1)
{ "setup_code": "", "test_cases": [ "assert top_values.shape == (2, 3), 'The shape of top values tensor should be (2, 3)'", "assert top_indices.shape == (2, 3), 'The shape of top indices tensor should be (2, 3)'", "assert torch.all(top_values == torch.tensor([[9.0, 7.0, 5.0], [10.0, 8.0, 6.0]])), 'Top values are not correct'", "assert torch.all(top_indices == torch.tensor([[4, 3, 2], [4, 3, 2]])), 'Top indices are not correct'" ] }
import tensorflow as tf
tensor = tf.constant([[1.0, 3.0, 5.0, 7.0, 9.0], [2.0, 4.0, 6.0, 8.0, 10.0]])
# values, indices =
values, indices = tf.math.top_k(tensor, 3, sorted=True)
{ "setup_code": "import tensorflow as tf\ntensor = tf.constant([[1.0, 3.0, 5.0, 7.0, 9.0], [2.0, 4.0, 6.0, 8.0, 10.0]])", "test_cases": [ "assert values.shape == (2, 3), 'The shape of top values tensor should be (2, 3)'", "assert indices.shape == (2, 3), 'The shape of top indices tensor should be (2, 3)'", "assert tf.reduce_all(tf.equal(values, tf.constant([[9.0, 7.0, 5.0], [10.0, 8.0, 6.0]]))), 'Top values are not correct'", "assert tf.reduce_all(tf.equal(indices, tf.constant([[4, 3, 2], [4, 3, 2]]))), 'Top indices are not correct'" ] }
87
import torch
# full_tensor =
full_tensor = torch.full((3, 3), 7)
{ "setup_code": "import torch", "test_cases": [ "assert full_tensor.shape == (3, 3), 'The shape of the full tensor should be (3, 3)'", "assert torch.all(full_tensor == 7), 'All elements in the tensor should be 7'", "assert full_tensor.dtype == torch.int64, 'The dtype of the tensor should be torch.int64'" ] }
import tensorflow as tf
# full_tensor =
full_tensor = tf.fill([3, 3], 7)
{ "setup_code": "import tensorflow as tf", "test_cases": [ "assert full_tensor.shape == (3, 3), 'The shape of the full tensor should be (3, 3)'", "assert tf.reduce_all(full_tensor == 7), 'All elements in the tensor should be 7'", "assert full_tensor.dtype == tf.int32, 'The dtype of the tensor should be tf.int32'" ] }
88
import torch
matrix = torch.tensor([[1, 0, 0], [0, 2, 0], [0, 0, 3]])
# trace_value =
trace_value = torch.trace(matrix)
{ "setup_code": "", "test_cases": [ "assert trace_value == 6, 'The trace of the matrix should be 6'", "assert isinstance(trace_value, torch.Tensor), 'The trace value should be a tensor'", "assert trace_value.item() == 6, 'The trace of the matrix calculated incorrectly'" ] }
import tensorflow as tf
matrix = tf.constant([[1, 0, 0], [0, 2, 0], [0, 0, 3]])
# trace_value =
trace_value = tf.linalg.trace(matrix)
{ "setup_code": "", "test_cases": [ "assert trace_value.numpy() == 6, 'The trace of the matrix should be 6'", "assert isinstance(trace_value, tf.Tensor), 'The trace value should be a tensor'", "assert trace_value.numpy() == 6, 'The trace of the matrix calculated incorrectly'" ] }
89
import torch
tensor = torch.randn(3, 3, 3)
# total_elements =
total_elements = torch.numel(tensor)
{ "setup_code": "", "test_cases": [ "assert total_elements == 27, 'The total number of elements should be 27'", "assert isinstance(total_elements, int), 'The return type should be an integer'" ] }
import tensorflow as tf
tensor = tf.random.normal([3, 3, 3])
# total_elements =
total_elements = tf.size(tensor).numpy().item()
{ "setup_code": "", "test_cases": [ "assert total_elements == 27, 'The total number of elements should be 27'", "assert isinstance(total_elements, int), 'The return type should be an integer'" ] }
90
import torch
real = torch.tensor([1.0, 2.0, 3.0, 4.0]).reshape(2, 2)
imag = torch.tensor([5.0, 6.0, 7.0, 8.0]).reshape(2, 2)
# complex_tensor =
complex_tensor = torch.complex(real, imag)
{ "setup_code": "", "test_cases": [ "assert complex_tensor.shape == (2, 2), 'The shape of the complex tensor should be (2, 2)'", "assert torch.all(torch.eq(complex_tensor.real, real)), 'The real parts of the complex tensor do not match'", "assert torch.all(torch.eq(complex_tensor.imag, imag)), 'The imaginary parts of the complex tensor do not match'" ] }
import tensorflow as tf
real = tf.constant([1.0, 2.0, 3.0, 4.0], shape=(2, 2))
imag = tf.constant([5.0, 6.0, 7.0, 8.0], shape=(2, 2))
# complex_tensor =
complex_tensor = tf.complex(real, imag)
{ "setup_code": "", "test_cases": [ "assert complex_tensor.shape == (2, 2), 'The shape of the complex tensor should be (2, 2)'", "assert tf.reduce_all(tf.equal(tf.math.real(complex_tensor), real)), 'The real parts of the complex tensor do not match'", "assert tf.reduce_all(tf.equal(tf.math.imag(complex_tensor), imag)), 'The imaginary parts of the complex tensor do not match'" ] }
91
import torch
import torchvision
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.ToTensor()])
# train_set = datasets.FashionMNIST(...)
# test_set = datasets.FashionMNIST(...)

transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
train_set = torchvision.datasets.FashionMNIST(
    root='./data/FashionMNIST',
    train=True,
    download=True,
    transform=transform
)
test_set = torchvision.datasets.FashionMNIST(
    root='./data/FashionMNIST',
    train=False,
    download=True,
    transform=transform
)
{ "setup_code": "from torch.utils.data import DataLoader\ntrain_set = torchvision.datasets.FashionMNIST(root='./data/FashionMNIST', train=True, download=True, transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]))\ntest_set = torchvision.datasets.FashionMNIST(root='./data/FashionMNIST', train=False, download=True, transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]))", "test_cases": [ "train_loader = DataLoader(train_set, batch_size=64, shuffle=True)\ntest_loader = DataLoader(test_set, batch_size=64, shuffle=False)\nassert len(train_loader.dataset) == 60000, 'The training set should contain 60000 images'", "assert len(test_loader.dataset) == 10000, 'The test set should contain 10000 images'", "sample_image, sample_label = next(iter(train_loader))\nassert sample_image.shape == (64, 1, 28, 28), 'Each image should have shape 1x28x28'" ] }
import tensorflow as tf
# Load the Fashion MNIST dataset using TensorFlow
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1).astype('float32') / 255
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1).astype('float32') / 255
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)
{ "setup_code": "(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()\nx_train = x_train.reshape(x_train.shape[0], 28, 28, 1).astype('float32') / 255\nx_test = x_test.reshape(x_test.shape[0], 28, 28, 1).astype('float32') / 255\ny_train = tf.keras.utils.to_categorical(y_train, 10)\ny_test = tf.keras.utils.to_categorical(y_test, 10)", "test_cases": [ "assert x_train.shape == (60000, 28, 28, 1), 'Training data shape should be (60000, 28, 28, 1)'", "assert x_test.shape == (10000, 28, 28, 1), 'Test data shape should be (10000, 28, 28, 1)'", "assert y_train.shape == (60000, 10), 'Training labels should be one-hot encoded with shape (60000, 10)'", "assert y_test.shape == (10000, 10), 'Test labels should be one-hot encoded with shape (10000, 10)'" ] }
92
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNModel(nn.Module):
    def __init__(self, input_shape):
        super().__init__()

class CNNModel(nn.Module):
    def __init__(self, input_shape):
        super().__init__()
        self.model = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
            nn.Conv2d(32, 64, kernel_size=3),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
            nn.Flatten(),
            nn.Dropout(0.5),
            nn.Linear(64 * 5 * 5, 1),
            nn.Sigmoid()
        )

model = CNNModel(input_shape=(1, 28, 28))
{ "setup_code": "import torch\nimport torch.nn as nn\nmodel = CNNModel(input_shape=(1, 28, 28))", "test_cases": [ "assert isinstance(model.model[0], nn.Conv2d) and model.model[0].out_channels == 32, 'First Conv2D layer should have 32 output channels'", "assert isinstance(model.model[3], nn.Conv2d) and model.model[3].out_channels == 64, 'Second Conv2D layer should have 64 output channels'", "assert isinstance(model.model[7], nn.Dropout), 'Should have a dropout layer'", "assert isinstance(model.model[9], nn.Sigmoid), 'Output should use sigmoid activation'" ] }
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

input_shape = (28, 28, 1)
# Define the Keras Sequential model here

model = keras.Sequential([
    keras.Input(shape=input_shape),
    layers.Conv2D(32, kernel_size=(3, 3), activation='relu'),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(64, kernel_size=(3, 3), activation='relu'),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid')
])
{ "setup_code": "", "test_cases": [ "assert model.layers[0].output.shape[1:] == (26, 26, 32), 'Output shape of first Conv2D should be (26, 26, 32)'", "assert model.layers[2].output.shape[1:] == (11, 11, 64), 'Output shape of second Conv2D should be (12, 12, 64)'", "assert isinstance(model.layers[5], layers.Dropout), 'Should have a dropout layer'", "assert model.layers[-1].activation.__name__ == 'sigmoid', 'Output layer should use sigmoid activation'" ] }
93
import torch
import torch.nn as nn

class Polynomial3(nn.Module):
    def __init__(self):
        super().__init__()

class Polynomial3(nn.Module):
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.randn(()))
        self.b = nn.Parameter(torch.randn(()))
        self.c = nn.Parameter(torch.randn(()))
        self.d = nn.Parameter(torch.randn(()))

    def forward(self, x):
        return self.a + self.b * x + self.c * x ** 2 + self.d * x ** 3

model = Polynomial3()
{ "setup_code": "import torch\nimport torch.nn as nn\nmodel = Polynomial3()", "test_cases": [ "x = torch.tensor(2.0)\nassert isinstance(model(x), torch.Tensor), 'Model should output a tensor'", "x = torch.tensor(1.0)\nresult = model(x)\nexpected = model.a.item() + model.b.item() + model.c.item() + model.d.item()\nassert torch.isclose(result,torch.tensor(expected),atol=1e-5), 'Polynomial calculation does not match expected value'", "x = torch.tensor([2.0, 3.0])\nassert model(x).shape == torch.Size([2]), 'Output shape should match the number of input elements'" ] }
import tensorflow as tf
import numpy as np

class Polynomial3(tf.keras.layers.Layer):
    def __init__(self):
        super().__init__()

class Polynomial3(tf.keras.layers.Layer):
    def __init__(self):
        super().__init__()
        self.a = self.add_weight(shape=(), initializer='random_normal')
        self.b = self.add_weight(shape=(), initializer='random_normal')
        self.c = self.add_weight(shape=(), initializer='random_normal')
        self.d = self.add_weight(shape=(), initializer='random_normal')

    def call(self, x):
        return self.a + self.b * x + self.c * x ** 2 + self.d * x ** 3

model = Polynomial3()
{ "setup_code": "model = Polynomial3()", "test_cases": [ "x = tf.constant(2.0)\nassert isinstance(model(x), tf.Tensor), 'Model should output a tensor'", "x = tf.constant(1.0)\nresult = model(x)\nexpected = model.a.numpy() + model.b.numpy() + model.c.numpy() + model.d.numpy()\nassert np.isclose(result.numpy(), expected, atol=1e-5), 'Polynomial calculation does not match expected value'", "x = tf.constant([2.0, 3.0])\nassert model(x).shape == tf.TensorShape([2]), 'Output shape should match the number of input elements'" ] }
94
import torch
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Define the model layers here

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(3, 2),
            nn.ReLU(),
            nn.Linear(2, 3),
            nn.ReLU(),
            nn.Linear(3, 4)
        )

    def forward(self, x):
        return self.layers(x)

model = SimpleModel()
{ "setup_code": "import torch\nimport torch.nn as nn\nmodel = SimpleModel()", "test_cases": [ "assert isinstance(model.layers[0], nn.Linear) and model.layers[0].out_features == 2, 'First layer should be Linear with 2 output features'", "assert isinstance(model.layers[1], nn.ReLU), 'Second layer should be ReLU'", "assert isinstance(model.layers[2], nn.Linear) and model.layers[2].out_features == 3, 'Third layer should be Linear with 3 output features'", "assert isinstance(model.layers[4], nn.Linear) and model.layers[4].out_features == 4, 'Fifth layer should be Linear with 4 output features'" ] }
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()
# Add layers to the model

model = keras.Sequential([
    layers.Dense(2, input_shape=(3,), activation='relu'),
    layers.Dense(3, activation='relu'),
    layers.Dense(4)
])
{ "setup_code": "", "test_cases": [ "assert model.layers[0].output.shape == (None, 2), 'Output shape of first Dense layer should be (None, 2)'", "assert model.layers[1].activation.__name__ == 'relu', 'Activation of second Dense layer should be ReLU'", "assert model.layers[2].output.shape == (None, 4), 'Output shape of third Dense layer should be (None, 4)'" ] }
95
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearRegression(nn.Module):
    def __init__(self, input_size, output_size):
        super().__init__()
        # Define the model layers here

class LinearRegression(nn.Module):
    def __init__(self, input_size, output_size):
        super().__init__()
        self.f1 = nn.Linear(input_size, 2000)
        self.f2 = nn.Linear(2000, output_size)

    def forward(self, x):
        x = self.f1(x)
        x = F.leaky_relu(x, negative_slope=0.01)
        x = F.dropout(x, p=0.3)
        x = self.f2(x)
        return torch.sigmoid(x)

# model = LinearRegression(input_size, output_size)
{ "setup_code": "model = LinearRegression(10, 1)", "test_cases": [ "assert isinstance(model.f1, nn.Linear) and model.f1.out_features == 2000, 'First layer should be linear with 2000 output features'", "assert model.f1.in_features == 10, 'Input features of the first layer should match input_size'", "input = torch.randn(10)\noutput = model(input)\nassert output.dim() == 1, 'Output should have one dimension'" ] }
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential()
# Add layers to the model

model = tf.keras.Sequential([
    layers.Dense(2000, input_dim=10, activation='leaky_relu'),
    layers.Dropout(0.3),
    layers.Dense(1, activation='sigmoid')
])
{ "setup_code": "", "test_cases": [ "assert model.layers[0].output.shape == (None, 2000), 'First layer output shape should be (None, 2000)'", "assert model.layers[0].activation.__name__ == 'leaky_relu', 'First layer activation should be Leaky ReLU'", "import numpy as np\ninput = np.random.rand(10).reshape(1, -1)\noutput = model(input)\nassert output.shape == (1, 1), 'Output shape should be (1, 1)'" ] }
96
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, weights):
        super().__init__()
        # Initialize model layers here

    # def forward(self, x, hidden):
    #     pass

    # def init_hidden(self, batch_size):
    #     pass

class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, weights):
        super().__init__()
        self.output_size = output_size
        self.n_layers = n_layers
        self.hidden_dim = hidden_dim
        self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.word_embeddings.weight = nn.Parameter(weights, requires_grad=False)
        self.dropout_1 = nn.Dropout(0.3)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=0.3, batch_first=True)
        self.dropout_2 = nn.Dropout(0.3)
        self.label_layer = nn.Linear(hidden_dim, output_size)
        self.act = nn.Sigmoid()

    def init_hidden(self, batch_size):
        weight = next(self.parameters()).data
        return (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
                weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())

    def forward(self, x, hidden):
        x = self.word_embeddings(x)
        x = self.dropout_1(x)
        lstm_out, hidden = self.lstm(x, hidden)
        lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
        out = self.dropout_2(lstm_out)
        out = self.label_layer(out)
        out = out.view(-1, self.output_size)
        out = self.act(out)
        return out, hidden

# model = LSTMClassifier(1000, 10, 300, 256, 2, torch.randn(1000, 300))
{ "setup_code": "model = LSTMClassifier(1000, 10, 300, 256, 2, torch.randn(1000, 300))", "test_cases": [ "x = torch.randint(0, 1000, (1, 5))\nhidden = model.init_hidden(1)\noutput, hidden = model(x, hidden)\nassert isinstance(output, torch.Tensor) and output.shape == (5, 10), 'Output tensor should have shape (5, 10)'", "assert isinstance(model.lstm, nn.LSTM), 'Model should contain an LSTM layer'", "assert model.lstm.dropout == 0.3, 'Dropout in LSTM should be 0.3'", "hidden = model.init_hidden(1)\nassert isinstance(hidden, tuple) and len(hidden) == 2, 'Hidden state should be a tuple with two elements'", "assert isinstance(model.act, nn.Sigmoid), 'Activation function should be sigmoid'" ] }
import tensorflow as tf
from tensorflow.keras import layers

class LSTMClassifier(tf.keras.Model):
    def __init__(self, vocab_size, embedding_dim, hidden_dim, n_layers, output_size, weights):
        super().__init__()
        # Initialize model layers here

class LSTMClassifier(tf.keras.Model):
    def __init__(self, vocab_size, embedding_dim, hidden_dim, n_layers, output_size, weights):
        super().__init__()
        self.embedding = layers.Embedding(input_dim=vocab_size, output_dim=embedding_dim,
                                          weights=[weights], trainable=False)
        self.lstm = layers.LSTM(hidden_dim, return_sequences=True, dropout=0.3, recurrent_dropout=0.3)
        self.dropout = layers.Dropout(0.3)
        self.dense = layers.Dense(output_size, activation='sigmoid')

    def call(self, inputs):
        x = self.embedding(inputs)
        x = self.lstm(x)
        x = self.dropout(x)
        return self.dense(x)

model = LSTMClassifier(1000, 300, 256, 2, 10, tf.random.normal((1000, 300)))
{ "setup_code": "model = LSTMClassifier(1000, 300, 256, 2, 10, tf.random.normal((1000, 300)))", "test_cases": [ "x = tf.random.uniform((1, 5), maxval=1000, dtype=tf.int32)\noutput = model(x)\nassert tf.is_tensor(output) and output.shape == (1,5, 10), 'Output tensor should have shape (1,5, 10)'", "assert hasattr(model, 'lstm') and isinstance(model.lstm, layers.LSTM), 'Model should contain an LSTM layer'", "assert model.lstm.dropout == 0.3, 'Dropout in LSTM should be 0.3'", "output = model(x)\nassert tf.nn.sigmoid_cross_entropy_with_logits(logits=output, labels=tf.ones_like(output)).shape == output.shape, 'Output activation should be sigmoid and match the output shape'" ] }
97
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeNet5(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        # Define model layers here

class LeNet5(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, stride=1, padding=0),
            nn.BatchNorm2d(6),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        self.layer2 = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=5, stride=1, padding=0),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        self.fc = nn.Linear(400, 120)
        self.relu = nn.ReLU()
        self.fc1 = nn.Linear(120, 84)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Linear(84, num_classes)

    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        x = x.view(-1, 400)
        x = self.fc(x)
        x = self.relu(x)
        x = self.fc1(x)
        x = self.relu1(x)
        x = self.fc2(x)
        return x
{ "setup_code": "model = LeNet5(num_classes=10)", "test_cases": [ "assert isinstance(model.layer1[0], nn.Conv2d), 'First layer should be a Conv2d'", "assert model.layer1[0].out_channels == 6, 'Output channels of first Conv2d should be 6'", "input = torch.randn(1, 1, 32, 32)\noutput = model(input)\nassert output.size(1) == 10, 'Output size should match number of classes'" ] }
import tensorflow as tf
from tensorflow.keras import layers

input_shape = (32, 32, 1)
num_classes = 10
# Define the Keras Sequential model

model = tf.keras.Sequential([
    layers.Conv2D(6, kernel_size=(5, 5), activation='relu', input_shape=input_shape),
    layers.BatchNormalization(),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(16, kernel_size=(5, 5), activation='relu'),
    layers.BatchNormalization(),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dense(120, activation='relu'),
    layers.Dense(84, activation='relu'),
    layers.Dense(num_classes, activation='softmax')
])
{ "setup_code": "", "test_cases": [ "assert model.layers[0].output.shape == (None, 28, 28, 6), 'Output shape of first Conv2D layer should be (None, 28, 28, 6)'", "assert isinstance(model.layers[1], layers.BatchNormalization), 'Second layer should be BatchNormalization'", "import numpy as np\ninput = np.random.rand(1, 32, 32, 1).astype('float32')\noutput = model(input)\nassert output.shape == (1, 10), 'Output shape should be (1, 10)'" ] }
98
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super().__init__()
        # Initialize layers here

class ResidualBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super().__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU()
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(out_channels)
        )
        self.downsample = downsample
        self.relu = nn.ReLU()

    def forward(self, x):
        residual = x
        out = self.conv1(x)
        out = self.conv2(out)
        if self.downsample:
            residual = self.downsample(x)
        out += residual
        out = self.relu(out)
        return out
{ "setup_code": "model = ResidualBlock(3, 3)", "test_cases": [ "assert isinstance(model.conv1[0], nn.Conv2d), 'First layer should be Conv2d'", "assert model.conv1[0].out_channels == 3, 'Incorrect number of output channels in first convolution'", "input = torch.randn(1, 3, 64, 64)\noutput = model(input)\nassert output.shape == (1, 3, 64, 64), 'Output shape should match input shape if no downsampling'" ] }
import tensorflow as tf
from tensorflow.keras import layers

class ResidualBlock(tf.keras.layers.Layer):
    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super().__init__()
        # Initialize layers here

class ResidualBlock(tf.keras.layers.Layer):
    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super().__init__()
        self.conv1 = tf.keras.Sequential([
            layers.Conv2D(out_channels, 3, stride, 'same'),
            layers.BatchNormalization(),
            layers.ReLU()
        ])
        self.conv2 = tf.keras.Sequential([
            layers.Conv2D(out_channels, 3, 1, 'same'),
            layers.BatchNormalization()
        ])
        self.downsample = downsample

    def call(self, inputs):
        residual = inputs
        x = self.conv1(inputs)
        x = self.conv2(x)
        if self.downsample is not None:
            residual = self.downsample(inputs)
        x += residual
        return layers.ReLU()(x)
{ "setup_code": "model = ResidualBlock(3, 3)", "test_cases": [ "assert isinstance(model.conv1.layers[0], layers.Conv2D), 'First layer should be Conv2D'", "assert model.conv1.layers[0].filters == 3, 'Incorrect number of filters in first convolution'", "input = tf.random.normal((1, 64, 64, 3))\noutput = model(input)\nassert output.shape == (1, 64, 64, 3), 'Output shape should match input shape if no downsampling'" ] }
99
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlexNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Define model layers here

class AlexNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=0),
            nn.BatchNorm2d(96),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2)
        )
        self.layer2 = nn.Sequential(
            nn.Conv2d(96, 256, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(256),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2)
        )
        self.layer3 = nn.Sequential(
            nn.Conv2d(256, 384, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(384),
            nn.ReLU()
        )
        self.layer4 = nn.Sequential(
            nn.Conv2d(384, 384, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(384),
            nn.ReLU()
        )
        self.layer5 = nn.Sequential(
            nn.Conv2d(384, 256, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2)
        )
        self.fc = nn.Sequential(
            nn.Dropout(0.5),
            nn.Linear(9216, 4096),
            nn.ReLU()
        )
        self.fc1 = nn.Sequential(
            nn.Dropout(0.5),
            nn.Linear(4096, 4096),
            nn.ReLU()
        )
        self.fc2 = nn.Sequential(
            nn.Linear(4096, num_classes)
        )

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = self.layer5(out)
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        out = self.fc1(out)
        out = self.fc2(out)
        return out

model = AlexNet(num_classes=10)
{ "setup_code": "model = AlexNet(num_classes=10)", "test_cases": [ "assert isinstance(model.layer1[0], nn.Conv2d), 'Layer 1 should contain a Conv2d layer'", "assert model.fc[1].in_features == 9216, 'First fully connected layer should have 9216 input features'", "input = torch.randn(1, 3, 227, 227)\noutput = model(input)\nassert output.size(1) == 10, 'Final output should match the number of classes'" ] }
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential

# model = Sequential

model = Sequential([
    layers.Conv2D(96, kernel_size=11, strides=4, padding='valid', input_shape=(227, 227, 3)),
    layers.BatchNormalization(),
    layers.ReLU(),
    layers.MaxPooling2D(pool_size=3, strides=2),
    layers.Conv2D(256, kernel_size=5, strides=1, padding='same'),
    layers.BatchNormalization(),
    layers.ReLU(),
    layers.MaxPooling2D(pool_size=3, strides=2),
    layers.Conv2D(384, kernel_size=3, strides=1, padding='same'),
    layers.BatchNormalization(),
    layers.ReLU(),
    layers.Conv2D(384, kernel_size=3, strides=1, padding='same'),
    layers.BatchNormalization(),
    layers.ReLU(),
    layers.Conv2D(256, kernel_size=3, strides=1, padding='same'),
    layers.BatchNormalization(),
    layers.ReLU(),
    layers.MaxPooling2D(pool_size=3, strides=2),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(4096, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(4096, activation='relu'),
    layers.Dense(10, activation='softmax')
])
{ "setup_code": "model = Sequential([\n layers.Conv2D(96, kernel_size=11, strides=4, padding='valid', input_shape=(227, 227, 3)),\n layers.BatchNormalization(),\n layers.ReLU(),\n layers.MaxPooling2D(pool_size=3, strides=2),\n layers.Conv2D(256, kernel_size=5, strides=1, padding='same'),\n layers.BatchNormalization(),\n layers.ReLU(),\n layers.MaxPooling2D(pool_size=3, strides=2),\n layers.Conv2D(384, kernel_size=3, strides=1, padding='same'),\n layers.BatchNormalization(),\n layers.ReLU(),\n layers.Conv2D(384, kernel_size=3, strides=1, padding='same'),\n layers.BatchNormalization(),\n layers.ReLU(),\n layers.Conv2D(256, kernel_size=3, strides=1, padding='same'),\n layers.BatchNormalization(),\n layers.ReLU(),\n layers.MaxPooling2D(pool_size=3, strides=2),\n layers.Flatten(),\n layers.Dropout(0.5),\n layers.Dense(4096, activation='relu'),\n layers.Dropout(0.5),\n layers.Dense(4096, activation='relu'),\n layers.Dense(10, activation='softmax')\n])", "test_cases": [ "assert isinstance(model.layers[0], layers.Conv2D), 'First layer should be Conv2D'", "assert model.layers[14].output.shape== (None, 13, 13, 256), 'The shape of the output tensor is incorrect'", "input = tf.random.normal([1, 227, 227, 3])\noutput = model(input)\nassert output.shape == (1, 10), 'Final output shape should match number of classes'" ] }
End of preview.

DS-CodeBridge

A benchmark of 800 carefully curated, bidirectionally translatable tasks spanning three core stages of the data science workflow: Data Querying, Data Manipulation, and Data Mining.

Download the Dataset

from datasets import load_dataset

# Load the dl200 split in streaming mode
dl200_dataset = load_dataset(
    "xia01ongLi/DS-CodeBridge", 
    split="dl200",
    streaming=True  # Enable streaming mode
)

# Get the first example
first_example = next(iter(dl200_dataset))
print("First example:", first_example)
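Each dl200 row pairs a PyTorch and a TensorFlow version of the same task, including solution code and a test-code object with setup_code and test_cases fields (see the preview rows above). As a minimal sketch, the PyTorch side of the example loaded above could be checked as follows; note that exec runs dataset-provided code strings, so only do this with code you trust or inside a sandbox:

# Sketch: execute the row's PyTorch solution and its test cases.
# Run untrusted strings only in a sandboxed environment.
env = {}
exec(first_example["pytorch_library"], env)                   # imports
exec(first_example["pytorch_sol_code"], env)                  # reference solution
exec(first_example["pytorch_test_code"]["setup_code"], env)   # test setup
for test in first_example["pytorch_test_code"]["test_cases"]:
    exec(test, env)                                           # raises AssertionError on failure
print("All PyTorch test cases passed.")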

from datasets import load_dataset

# Load the dq300 split in streaming mode
dq300_dataset = load_dataset(
    "xia01ongLi/DS-CodeBridge", 
    split="dq300",
    streaming=True  # Enable streaming mode
)

# Get the first example
first_example = next(iter(dq300_dataset))
print("First example:", first_example)

from datasets import load_dataset

# Load the ds300 split in streaming mode
ds300_dataset = load_dataset(
    "xia01ongLi/DS-CodeBridge", 
    split="ds300",
    streaming=True  # Enable streaming mode
)

# Get the first example
first_example = next(iter(ds300_dataset))
print("First example:", first_example)

DQ Database Setup

PostgreSQL Setup

  • Download and install PostgreSQL from the official website: https://www.postgresql.org/download/
  • Download pgAdmin 4 from the official website: https://www.pgadmin.org/download/ (recommended for monitoring the database)
  • In pgAdmin 4 or the terminal, create a new database with a name of your choice
  • Build the database by running the following command (the PostgreSQL version of the database is in the data folder):
psql -U USERNAME -d DB_NAME -f postgresql_db.sql
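
Once the database is restored, the SQL field of each dq300 row can be run directly against it. Below is a minimal sketch using psycopg2; USERNAME and DB_NAME are placeholders for your own setup, and each row's db_id indicates which restored database its query targets:

import psycopg2
from datasets import load_dataset

# Placeholder credentials -- replace with your own PostgreSQL setup.
conn = psycopg2.connect(dbname="DB_NAME", user="USERNAME")

dq300 = load_dataset("xia01ongLi/DS-CodeBridge", split="dq300", streaming=True)
example = next(iter(dq300))
print("Target database:", example["db_id"])

with conn.cursor() as cur:
    cur.execute(example["SQL"])   # run the row's reference SQL query
    print(cur.fetchall()[:5])     # first few result rows
conn.close()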

Pandas DB Setup

  • Download the Pandas version of the database from the data folder; a minimal loading sketch follows below
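
The pandas_query field of each dq300 row is the pandas counterpart of the same SQL query, written against the database tables as DataFrames. A minimal sketch under an explicit assumption: the exact packaging of the Pandas version database is not specified here, so the file path and the {table_name: DataFrame} dictionary below are hypothetical and should be adapted to the actual files:

import pandas as pd
from datasets import load_dataset

# Hypothetical loading step: adjust to however the Pandas version
# of the database in the data folder is actually packaged.
tables = pd.read_pickle("data/pandas_db/DB_NAME.pkl")  # {table_name: DataFrame}

dq300 = load_dataset("xia01ongLi/DS-CodeBridge", split="dq300", streaming=True)
example = next(iter(dq300))

# Expose each table by name so the query string can reference it directly.
result = eval(example["pandas_query"], {"pd": pd, **tables})
print(result)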