SuperNived committed on
Commit 63b049a · verified · 1 Parent(s): 4b3a596

Upload converted_data.csv

Files changed (1)
  1. converted_data.csv +262 -0
converted_data.csv ADDED
@@ -0,0 +1,262 @@
+ Instruction,Input,Output
+ ,"Flagship Models
+ GPT-4o (New)
+ Description: Our fastest and most affordable flagship model
+ Capabilities: Text and image input, text output
+ Context Length: 128k tokens
+ Pricing: $5 | Output: $15* (per 1 million tokens)
+ GPT-4 Turbo
+ Description: Our previous high-intelligence model
+ Capabilities: Text and image input, text output
+ Context Length: 128k tokens
+ Pricing: $10 | Output: $30* (per 1 million tokens)
+ GPT-3.5 Turbo
+ Description: Our fast, inexpensive model for simple tasks
+ Capabilities: Text input, text output
+ Context Length: 16k tokens
+ Pricing: $0.50 | Output: $1.50* (per 1 million tokens)
+ Detailed Models Description
+ GPT-4o
+ GPT-4o (“o” for “omni”) is our most advanced model. It is multimodal (accepting text or image inputs and outputting text), and it has the same high intelligence as GPT-4 Turbo but is much more efficient—it generates text 2x faster and is 50% cheaper. Additionally, GPT-4o has the best vision and performance across non-English languages of any of our models. GPT-4o is available in the OpenAI API to paying customers. Learn how to use GPT-4o in our text generation guide.","Reproducible Outputs (Beta)
+ Chat Completions are non-deterministic by default (which means model outputs may differ from request to request). That being said, we offer some control towards deterministic outputs by giving you access to the seed parameter and the system_fingerprint response field."
+ ,"Flagship Models
+ GPT-4o (New)
+ Description: Our fastest and most affordable flagship model
+ Capabilities: Text and image input, text output
+ Context Length: 128k tokens
+ Pricing: $5 | Output: $15* (per 1 million tokens)
+ GPT-4 Turbo
+ Description: Our previous high-intelligence model
+ Capabilities: Text and image input, text output
+ Context Length: 128k tokens
+ Pricing: $10 | Output: $30* (per 1 million tokens)
+ GPT-3.5 Turbo
+ Description: Our fast, inexpensive model for simple tasks
+ Capabilities: Text input, text output
+ Context Length: 16k tokens
+ Pricing: $0.50 | Output: $1.50* (per 1 million tokens)
+ *Prices per 1 million tokens","Deterministic Outputs
+ To receive (mostly) deterministic outputs across API calls, you can:"
+ ,"def get_database_info(conn):
+ """"""Return a list of dicts containing the table name and columns for each table in the database.""""""
+ table_dicts = []
+ for table_name in get_table_names(conn):
+ columns_names = get_column_names(conn, table_name)
+ table_dicts.append({""table_name"": table_name, ""column_names"": columns_names})
+ return table_dicts
+ Step 2: Extract Database Schema
+ python
+ Copy code
+ database_schema_dict = get_database_info(conn)
+ database_schema_string = ""\n"".join(
+ [
+ f""Table: {table['table_name']}\nColumns: {', '.join(table['column_names'])}""
+ for table in database_schema_dict
+ ]
+ )
+ Step 3: Define Function Specification
+ python
+ Copy code
+ tools = [
+ {
+ ""type"": ""function"",
+ ""function"": {
+ ""name"": ""ask_database"",
+ ""description"": ""Use this function to answer user questions about music. Input should be a fully formed SQL query."",
+ ""parameters"": {
+ ""type"": ""object"",
+ ""properties"": {
+ ""query"": {
+ ""type"": ""string"",
+ ""description"": f""""""
+ SQL query extracting info to answer the user's question.
+ SQL should be written using this database schema:
+ {database_schema_string}
+ The query should be returned in plain text, not in JSON.
+ """""",
+ }
+ },
+ ""required"": [""query""],
+ },
+ }
+ }
+ ]
+ Step 4: Implement SQL Query Function
+ python
+ Copy code
+ def ask_database(conn, query):
+ """"""Function to query SQLite database with a provided SQL query.""""""
+ try:
+ results = str(conn.execute(query).fetchall())
+ except Exception as e:
+ results = f""query failed with error: {e}""
+ return results
+ Step 5: Invoke Function Call Using Chat Completions API
+ python
+ Copy code
+ # Step 1: Prompt with content that may result in function call
+ messages = [{""role"": ""user"", ""content"": ""What is the name of the album with the most tracks?""}]
+ response = client.chat.completions.create(
+ model='gpt-4o',
+ messages=messages,
+ tools=tools,
+ tool_choice=""auto""
+ )
+ response_message = response.choices[0].message
+ messages.append(response_message)
+ pretty_print_conversation(messages)","Example Deterministic Output API Call
+ Explore the new seed parameter in the OpenAI cookbook."
+ ,"python
+ Copy code
+ thread = client.beta.threads.create(
+ messages=[
+ {
+ ""role"": ""user"",
+ ""content"": ""Create 3 data visualizations based on the trends in this file."",
+ ""attachments"": [
+ {
+ ""file_id"": file.id,
+ ""tools"": [{""type"": ""code_interpreter""}]
+ }
+ ]
+ }
+ ]
+ )
+ Image Input Content
+ Message content can contain either external image URLs or File IDs uploaded via the File API. Only models with Vision support can accept image input. Supported image content types include png, jpg, gif, and webp. When creating image files, pass purpose=""vision"" to allow you to later download and display the input content.","json
+ Copy code
+ {
+ ""id"": ""run_qJL1kI9xxWlfE0z1yfL0fGg9"",
+ ...
+ ""status"": ""requires_action"",
+ ""required_action"": {
+ ""submit_tool_outputs"": {
+ ""tool_calls"": [
+ {
+ ""id"": ""call_FthC9qRpsL5kBpwwyw6c7j4k"",
+ ""function"": {
+ ""arguments"": ""{\""location\"": \""San Francisco, CA\""}"",
+ ""name"": ""get_rain_probability""
+ },
+ ""type"": ""function""
+ },
+ {
+ ""id"": ""call_RpEDoB8O0FTL9JoKTuCVFOyR"",
+ ""function"": {
+ ""arguments"": ""{\""location\"": \""San Francisco, CA\"", \""unit\"": \""Fahrenheit\""}"",
+ ""name"": ""get_current_temperature""
+ },
+ ""type"": ""function""
+ }
+ ]
+ },
+ ...
+ ""type"": ""submit_tool_outputs""
+ }
+ }
+ Step 4: Handle Tool Calls and Submit Outputs
+ How you initiate a Run and submit tool_calls will differ depending on whether you are using streaming or not, although in both cases all tool_calls need to be submitted at the same time. You can then complete the Run by submitting the tool outputs from the functions you called. Pass each tool_call_id referenced in the required_action object to match outputs to each function call."
+ ,"json
+ Copy code
+ {
+ ""id"": ""msg_abc123"",
+ ""object"": ""thread.message"",
+ ""created_at"": 1699073585,
+ ""thread_id"": ""thread_abc123"",
+ ""role"": ""assistant"",
+ ""content"": [
+ {
+ ""type"": ""text"",
+ ""text"": {
+ ""value"": ""The rows of the CSV file have been shuffled and saved to a new CSV file. You can download the shuffled CSV file from the following link:\n\n[Download Shuffled CSV File](sandbox:/mnt/data/shuffled_file.csv)"",
+ ""annotations"": [
+ {
+ ""type"": ""file_path"",
+ ""text"": ""sandbox:/mnt/data/shuffled_file.csv"",
+ ""start_index"": 167,
+ ""end_index"": 202,
+ ""file_path"": {
+ ""file_id"": ""file-abc123""
+ }
+ }
+ ]
+ }
+ }
+ ]
+ }
+ Input and Output Logs of Code Interpreter
+ By listing the steps of a Run that called Code Interpreter, you can inspect the code input and output logs of Code Interpreter:","# The thread now has a vector store with that file in its tool resources.
+ print(thread.tool_resources.file_search)
+ Step 5: Create a Run and Check the Output
+ Now, create a Run and observe that the model uses the File Search tool to provide a response to the user’s question."
+ ,"$5.00 / 1M tokens
+ Output: $15.00 / 1M tokens
+ gpt-4o-2024-05-13",
+ ,"$5.00 / 1M tokens
+ Output: $15.00 / 1M tokens
+ Vision Pricing Calculator
+ Resolution: 150px x 150px
+ Price: $0.001275
+ GPT-3.5 Turbo
+ GPT-3.5 Turbo is optimized for dialog, fast, and inexpensive for simple tasks.",
+ ,"$0.50 / 1M tokens
+ Output: $1.50 / 1M tokens
+ gpt-3.5-turbo-instruct",
+ ,"$1.50 / 1M tokens
+ Output: $2.00 / 1M tokens
+ Embedding Models
+ Build advanced search, clustering, topic modeling, and classification functionality.",
+ ,"Training: $8.00 / 1M tokens
+ Input Usage: $3.00 / 1M tokens
+ Output Usage: $6.00 / 1M tokens
+ davinci-002",
+ ,"Training: $6.00 / 1M tokens
+ Input Usage: $12.00 / 1M tokens
+ Output Usage: $12.00 / 1M tokens
+ babbage-002",
+ ,"Training: $0.40 / 1M tokens
+ Input Usage: $1.60 / 1M tokens
+ Output Usage: $1.60 / 1M tokens
+ Assistants API
+ The Assistants API and its tools are billed at the chosen language model's per-token input/output rates. Additional fees for tool usage:",
+ ,"$10.00 / 1M tokens
+ Output: $30.00 / 1M tokens
+ gpt-4-turbo-2024-04-09",
+ ,"$10.00 / 1M tokens
+ Output: $30.00 / 1M tokens
+ gpt-4",
+ ,"$30.00 / 1M tokens
+ Output: $60.00 / 1M tokens
+ gpt-4-32k",
+ ,"$60.00 / 1M tokens
+ Output: $120.00 / 1M tokens
+ gpt-4-0125-preview",
+ ,"$10.00 / 1M tokens
+ Output: $30.00 / 1M tokens
+ gpt-4-1106-preview",
+ ,"$10.00 / 1M tokens
+ Output: $30.00 / 1M tokens
+ gpt-4-vision-preview",
+ ,"$10.00 / 1M tokens
+ Output: $30.00 / 1M tokens
+ gpt-3.5-turbo-1106",
+ ,"$1.00 / 1M tokens
+ Output: $2.00 / 1M tokens
+ gpt-3.5-turbo-0613",
+ ,"$1.50 / 1M tokens
+ Output: $2.00 / 1M tokens
+ gpt-3.5-turbo-16k-0613",
+ ,"$3.00 / 1M tokens
+ Output: $4.00 / 1M tokens
+ gpt-3.5-turbo-0301",
+ ,"$1.50 / 1M tokens
+ Output: $2.00 / 1M tokens
+ davinci-002",
+ ,"$2.00 / 1M tokens
+ Output: $2.00 / 1M tokens
+ babbage-002",
+ ,"$0.40 / 1M tokens
+ Output: $0.40 / 1M tokens
+ FAQ
+ What’s a token?
+ Tokens are pieces of words used for natural language processing. For English text, 1 token is approximately 4 characters or 0.75 words. As a reference, the collected works of Shakespeare are about 900,000 words or 1.2M tokens.",
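
Several rows above describe the Chat Completions seed parameter and the system_fingerprint response field for (mostly) deterministic outputs, but the list of steps is cut off in the data. As a minimal sketch of that technique using the official openai Python SDK (the model name, prompt, and seed value below are illustrative assumptions, not taken from the dataset):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Send the same request twice with a fixed seed and temperature=0.
# Matching system_fingerprint values indicate both calls hit the same backend
# configuration, so the two completions should be (mostly) identical.
for attempt in range(2):
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": "Summarize what a token is in one sentence."}],
        seed=12345,      # fixed seed for reproducible sampling
        temperature=0,
    )
    print(response.system_fingerprint, response.choices[0].message.content)

Comparing the two printed completions, together with their system_fingerprint values, is the kind of check the "Deterministic Outputs" row alludes to.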