Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
Content Summary
A novel prompting method called "Highlighted Chain of Thought" (HoT) helps large language models better explain their reasoning and makes their answers easier for humans to verify.
The approach works in two steps: First, the AI reformulates the original question and marks important facts using XML tags. Then, it generates an answer that references these highlighted facts, creating clear connections between the question and response.
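To make the two-step format concrete, here is a minimal Python sketch of how such a prompt could be assembled. The instruction wording, the XML tag names, and the worked example are illustrative assumptions, not the exact prompts released by the researchers.

```python
# Minimal sketch of a HoT-style prompt. The tag names (<fact1>, <fact2>, ...)
# and the arithmetic example are illustrative, not taken from the paper's prompts.

HOT_INSTRUCTION = (
    "First, restate the question and wrap the key facts in XML tags "
    "(<fact1>...</fact1>, <fact2>...</fact2>, ...). Then answer the question, "
    "referencing the same tags whenever you use one of those facts."
)

FEW_SHOT_EXAMPLE = (
    "Q: A store sells pens for 2 dollars each and notebooks for 5 dollars each. "
    "How much do 3 pens and 2 notebooks cost?\n"
    "Reformatted question: A store sells pens for <fact1>2 dollars each</fact1> "
    "and notebooks for <fact2>5 dollars each</fact2>. How much do <fact3>3 pens</fact3> "
    "and <fact4>2 notebooks</fact4> cost?\n"
    "Answer: <fact3>3 pens</fact3> cost 3 x <fact1>2 dollars</fact1> = 6 dollars, "
    "<fact4>2 notebooks</fact4> cost 2 x <fact2>5 dollars</fact2> = 10 dollars, "
    "so the total is 16 dollars."
)

def build_hot_prompt(question: str) -> str:
    """Assemble a highlighted-chain-of-thought prompt for a new question."""
    return f"{HOT_INSTRUCTION}\n\n{FEW_SHOT_EXAMPLE}\n\nQ: {question}\nReformatted question:"

print(build_hot_prompt("A train travels 60 km per hour for 2.5 hours. How far does it go?"))
```

In practice, additional few-shot examples drawn from the annotated question-answer pairs would precede the new question, and the model's highlighted answer would be rendered with color-coded tags for the human reader.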
This structured approach forces models to more carefully consider the facts presented, which may reduce hallucinations, according to the researchers. The color-coded highlights also make it faster for humans to verify the AI's reasoning.
The research team used 15 human-annotated question-answer pairs as in-context examples to prompt AI models to generate the highlights on their own. Testing shows HoT improves accuracy across a range of tasks, with gains of up to 15 percent depending on the model and benchmark.
Compared to the traditional chain-of-thought (CoT) method used to train current reasoning models like OpenAI o1, HoT increased accuracy by 1.6 percentage points for arithmetic tasks, 2.58 points for question-answering, and 2.53 points for logical reasoning.
The researchers tested HoT across five AI models: GPT-4o, Gemini-1.5-Pro, Gemini-1.5-Flash, Llama-3.1-70B, and Llama-3.1-405B. They evaluated 17 different task types covering arithmetic, reading comprehension, and logical thinking.
Reasoning models showed little to no benefit from HoT in testing, and in some cases performed worse; DeepSeek-R1, for example, showed slightly decreased performance. The researchers attribute this to the example-based prompting approach, which can degrade results for reasoning models.
Mixed results for human verification
Human testers completed verification tasks 25 percent faster with highlighted answers. However, the highlighting had an unexpected effect on trust: Users became more likely to accept AI answers, even incorrect ones.
With highlighting, humans correctly identified accurate answers 84.5 percent of the time, compared to 78.8 percent without highlighting. However, their ability to spot wrong answers dropped from 72.2 percent to 54.8 percent when highlighting was present. Tests using AI models as verifiers showed no clear improvement.
The researchers remain optimistic about HoT's potential to make AI systems more transparent and comprehensible, though they acknowledge more research is needed on how highlighting affects user trust.
The method also has technical limitations. Smaller models such as Llama-3.1-8B and Qwen-2.5-Coder-32B struggle to follow tagging instructions, often tagging results incorrectly or simply repeating examples. The research also found that moving tags to random phrases significantly affects accuracy, highlighting the importance of consistent tagging between questions and answers.
Looking ahead, the team plans to train AI models to generate HoT answers directly rather than using prompt examples, which could make the method more effective and widely applicable.
The research paper is available on the preprint server arXiv and on a project page. The researchers have made their code and data available on GitHub.
Researchers at The Ohio State University have introduced Finer-CAM, an innovative method that significantly improves the precision and interpretability of image explanations in fine-grained classification tasks. This advanced technique addresses key limitations of existing Class Activation Map (CAM) methods by explicitly highlighting subtle yet critical differences between visually similar categories.
Current Challenge with Traditional CAM
Conventional CAM methods typically illustrate general regions influencing a neural network’s predictions but frequently fail to distinguish fine details necessary for differentiating closely related classes. This limitation poses significant challenges in fields requiring precise differentiation, such as species identification, automotive model recognition, and aircraft type differentiation.
Finer-CAM: Methodological Breakthrough
The central innovation of Finer-CAM lies in its comparative explanation strategy. Unlike traditional CAM methods that focus solely on features predictive of a single class, Finer-CAM explicitly contrasts the target class with visually similar classes. By calculating gradients based on the difference in prediction logits between the target class and its similar counterparts, it reveals unique image features, enhancing the clarity and accuracy of visual explanations.
Finer-CAM Pipeline
The methodological pipeline of Finer-CAM involves three main stages, sketched in code after this list:

1. Feature Extraction: An input image first passes through the neural network's encoder blocks, generating intermediate feature maps. A subsequent linear classifier uses these feature maps to produce prediction logits, which quantify the confidence of predictions for the various classes.
2. Gradient Calculation (Logit Difference): Standard CAM methods calculate gradients for a single class. Finer-CAM instead computes gradients based on the difference between the prediction logits of the target class and a visually similar class. This comparison isolates the subtle visual features that are specifically discriminative for the target class by suppressing commonly shared features.
3. Activation Highlighting: The gradients calculated from the logit difference are used to produce enhanced class activation maps that emphasize the discriminative visual details crucial for distinguishing between similar categories.
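The following is a minimal PyTorch sketch of a Grad-CAM-style map driven by the logit difference described in stage 2. The pooling scheme, function signature, and normalization are assumptions for illustration; the official implementation differs in its details.

```python
# Sketch of the logit-difference gradient behind Finer-CAM, assuming globally
# average-pooled encoder features feed a linear classifier.

import torch
import torch.nn.functional as F

def finer_cam(feature_maps: torch.Tensor,
              classifier: torch.nn.Linear,
              target_class: int,
              similar_class: int) -> torch.Tensor:
    """Class activation map from the gradient of (logit_target - logit_similar).

    feature_maps: encoder activations of shape (1, C, H, W).
    """
    feats = feature_maps.detach().requires_grad_(True)
    pooled = feats.mean(dim=(2, 3))                  # global average pooling -> (1, C)
    logits = classifier(pooled)                      # (1, num_classes)

    # Key difference from standard Grad-CAM: backpropagate the logit *difference*,
    # which suppresses features shared with the visually similar class.
    score = logits[0, target_class] - logits[0, similar_class]
    grads = torch.autograd.grad(score, feats)[0]     # (1, C, H, W)

    weights = grads.mean(dim=(2, 3), keepdim=True)   # per-channel importance
    cam = F.relu((weights * feats).sum(dim=1))       # (1, H, W)
    return cam / (cam.max() + 1e-8)                  # normalize to [0, 1]
```

The resulting map can be upsampled to the input resolution and overlaid on the image; one natural heuristic is to pick the comparison class as the most confidently predicted class that is visually similar to the target.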
Experimental Validation
Model Accuracy
Researchers evaluated Finer-CAM across two popular neural network backbones, CLIP and DINOv2. Experiments demonstrated that DINOv2 generally produces higher-quality visual embeddings, achieving superior classification accuracy compared to CLIP across all tested datasets.
Results on FishVista and Aircraft
Quantitative evaluations on the FishVista and Aircraft datasets further demonstrate Finer-CAM’s effectiveness. Compared to baseline CAM methods (Grad-CAM, Layer-CAM, Score-CAM), Finer-CAM consistently delivered improved performance metrics, notably in relative confidence drop and localization accuracy, underscoring its ability to highlight discriminative details crucial for fine-grained classification.
Results on DINOv2
Additional evaluations using DINOv2 as the backbone showed that Finer-CAM consistently outperformed baseline methods. These results indicate that Finer-CAM’s comparative method effectively enhances localization performance and interpretability. Due to DINOv2’s high accuracy, more pixels need to be masked to significantly impact predictions, resulting in larger deletion AUC values and occasionally smaller relative confidence drops compared to CLIP.
Visual and Quantitative Advantages
- Highly Precise Localization: Clearly pinpoints discriminative visual features, such as specific coloration patterns in birds, detailed structural elements in cars, and subtle design variations in aircraft.
- Reduction of Background Noise: Significantly reduces irrelevant background activations, increasing the relevance of explanations.
- Quantitative Excellence: Outperforms traditional CAM approaches (Grad-CAM, Layer-CAM, Score-CAM) on metrics including relative confidence drop and localization accuracy.
Extendable to multi-modal zero-shot learning scenarios
Finer-CAM is extendable to multi-modal zero-shot learning scenarios. By intelligently comparing textual and visual features, it accurately localizes visual concepts within images, significantly expanding its applicability and interpretability.
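As a hedged sketch of that zero-shot setting, text-prompt embeddings can stand in for classifier weights, with the map again computed from a difference, here between a target prompt and a visually similar one. The shapes and the cosine-similarity formulation below are assumptions, not the paper's exact setup.

```python
# Zero-shot variant sketch: per-patch similarity to a target text prompt minus
# similarity to a close alternative prompt localizes the distinguishing concept.

import torch
import torch.nn.functional as F

def zero_shot_concept_map(patch_features: torch.Tensor,  # (H*W, D) visual patch embeddings
                          target_text: torch.Tensor,     # (D,) embedding of the target prompt
                          similar_text: torch.Tensor,    # (D,) embedding of a similar prompt
                          height: int,
                          width: int) -> torch.Tensor:
    patches = F.normalize(patch_features, dim=-1)
    target = F.normalize(target_text, dim=-1)
    similar = F.normalize(similar_text, dim=-1)
    # The per-patch similarity difference highlights what the target prompt
    # describes but the similar prompt does not.
    diff = patches @ target - patches @ similar          # (H*W,)
    cam = F.relu(diff).reshape(height, width)
    return cam / (cam.max() + 1e-8)
```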
Researchers have made Finer-CAM’s source code and a Colab demo available. Check out the paper, the GitHub repository, and the Colab demo. All credit for this research goes to the researchers of this project.
Reddit Vote Flip Share 0 Shares
Researchers at The Ohio State University have introduced Finer-CAM, an innovative method that significantly improves the precision and interpretability of image explanations in fine-grained classification tasks. This advanced technique addresses key limitations of existing Class Activation Map (CAM) methods by explicitly highlighting subtle yet critical differences between visually similar categories.
Current Challenge with Traditional CAM
Conventional CAM methods typically illustrate general regions influencing a neural network’s predictions but frequently fail to distinguish fine details necessary for differentiating closely related classes. This limitation poses significant challenges in fields requiring precise differentiation, such as species identification, automotive model recognition, and aircraft type differentiation.
Finer-CAM: Methodological Breakthrough
The central innovation of Finer-CAM lies in its comparative explanation strategy. Unlike traditional CAM methods that focus solely on features predictive of a single class, Finer-CAM explicitly contrasts the target class with visually similar classes. By calculating gradients based on the difference in prediction logits between the target class and its similar counterparts, it reveals unique image features, enhancing the clarity and accuracy of visual explanations.
Finer-CAM Pipeline
The methodological pipeline of Finer-CAM involves three main stages:
Feature Extraction: An input image first passes through neural network encoder blocks, generating intermediate feature maps.
A subsequent linear classifier uses these feature maps to produce prediction logits, which quantify the confidence of predictions for various classes. Gradient Calculation (Logit Difference): Standard CAM methods calculate gradients for a single class.
Finer-CAM computes gradients based on the difference between the prediction logits of the target class and a visually similar class.
This comparison identifies the subtle visual features specifically discriminative to the target class by suppressing commonly shared features. Activation Highlighting: The gradients calculated from the logit difference are used to produce enhanced class activation maps that emphasize discriminative visual details crucial for distinguishing between similar categories.
Experimental Validation
B.1. Model Accuracy
Researchers evaluated Finer-CAM across two popular neural network backbones, CLIP and DINOv2. Experiments demonstrated that DINOv2 generally produces higher-quality visual embeddings, achieving superior classification accuracy compared to CLIP across all tested datasets.
B.2. Results on FishVista and Aircraft
Quantitative evaluations on the FishVista and Aircraft datasets further demonstrate Finer-CAM’s effectiveness. Compared to baseline CAM methods (Grad-CAM, Layer-CAM, Score-CAM), Finer-CAM consistently delivered improved performance metrics, notably in relative confidence drop and localization accuracy, underscoring its ability to highlight discriminative details crucial for fine-grained classification.
B.3. Results on DINOv2
Additional evaluations using DINOv2 as the backbone showed that Finer-CAM consistently outperformed baseline methods. These results indicate that Finer-CAM’s comparative method effectively enhances localization performance and interpretability. Due to DINOv2’s high accuracy, more pixels need to be masked to significantly impact predictions, resulting in larger deletion AUC values and occasionally smaller relative confidence drops compared to CLIP.
Visual and Quantitative Advantages
Highly Precise Localization: Clearly pinpoints discriminative visual features, such as specific coloration patterns in birds, detailed structural elements in cars, and subtle design variations in aircraft.
Clearly pinpoints discriminative visual features, such as specific coloration patterns in birds, detailed structural elements in cars, and subtle design variations in aircraft. Reduction of Background Noise: Significantly reduces irrelevant background activations, increasing the relevance of explanations.
Significantly reduces irrelevant background activations, increasing the relevance of explanations. Quantitative Excellence: Outperforms traditional CAM approaches (Grad-CAM, Layer-CAM, Score-CAM) in metrics including relative confidence drop and localization accuracy.
Extendable to multi-modal zero-shot learning scenarios
Finer-CAM is extendable to multi-modal zero-shot learning scenarios. By intelligently comparing textual and visual features, it accurately localizes visual concepts within images, significantly expanding its applicability and interpretability.
Researchers have made Finer-CAM’s source code and colab demo available.
Check out the Paper, Github and Colab demo. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 80k+ ML SubReddit.
🚨 Recommended Read- LG AI Research Releases NEXUS: An Advanced System Integrating Agent AI System and Data Compliance Standards to Address Legal Concerns in AI Datasets | 10 |
Reddit Vote Flip Share 0 Shares
Researchers at The Ohio State University have introduced Finer-CAM, an innovative method that significantly improves the precision and interpretability of image explanations in fine-grained classification tasks. This advanced technique addresses key limitations of existing Class Activation Map (CAM) methods by explicitly highlighting subtle yet critical differences between visually similar categories.
Current Challenge with Traditional CAM
Conventional CAM methods typically illustrate general regions influencing a neural network’s predictions but frequently fail to distinguish fine details necessary for differentiating closely related classes. This limitation poses significant challenges in fields requiring precise differentiation, such as species identification, automotive model recognition, and aircraft type differentiation.
Finer-CAM: Methodological Breakthrough
The central innovation of Finer-CAM lies in its comparative explanation strategy. Unlike traditional CAM methods that focus solely on features predictive of a single class, Finer-CAM explicitly contrasts the target class with visually similar classes. By calculating gradients based on the difference in prediction logits between the target class and its similar counterparts, it reveals unique image features, enhancing the clarity and accuracy of visual explanations.
Finer-CAM Pipeline
The methodological pipeline of Finer-CAM involves three main stages:
Feature Extraction: An input image first passes through neural network encoder blocks, generating intermediate feature maps.
A subsequent linear classifier uses these feature maps to produce prediction logits, which quantify the confidence of predictions for various classes. Gradient Calculation (Logit Difference): Standard CAM methods calculate gradients for a single class.
Finer-CAM computes gradients based on the difference between the prediction logits of the target class and a visually similar class.
This comparison identifies the subtle visual features specifically discriminative to the target class by suppressing commonly shared features. Activation Highlighting: The gradients calculated from the logit difference are used to produce enhanced class activation maps that emphasize discriminative visual details crucial for distinguishing between similar categories.
Experimental Validation
B.1. Model Accuracy
Researchers evaluated Finer-CAM across two popular neural network backbones, CLIP and DINOv2. Experiments demonstrated that DINOv2 generally produces higher-quality visual embeddings, achieving superior classification accuracy compared to CLIP across all tested datasets.
B.2. Results on FishVista and Aircraft
Quantitative evaluations on the FishVista and Aircraft datasets further demonstrate Finer-CAM’s effectiveness. Compared to baseline CAM methods (Grad-CAM, Layer-CAM, Score-CAM), Finer-CAM consistently delivered improved performance metrics, notably in relative confidence drop and localization accuracy, underscoring its ability to highlight discriminative details crucial for fine-grained classification.
B.3. Results on DINOv2
Additional evaluations using DINOv2 as the backbone showed that Finer-CAM consistently outperformed baseline methods. These results indicate that Finer-CAM’s comparative method effectively enhances localization performance and interpretability. Due to DINOv2’s high accuracy, more pixels need to be masked to significantly impact predictions, resulting in larger deletion AUC values and occasionally smaller relative confidence drops compared to CLIP.
Visual and Quantitative Advantages
Highly Precise Localization: Clearly pinpoints discriminative visual features, such as specific coloration patterns in birds, detailed structural elements in cars, and subtle design variations in aircraft.
Clearly pinpoints discriminative visual features, such as specific coloration patterns in birds, detailed structural elements in cars, and subtle design variations in aircraft. Reduction of Background Noise: Significantly reduces irrelevant background activations, increasing the relevance of explanations.
Significantly reduces irrelevant background activations, increasing the relevance of explanations. Quantitative Excellence: Outperforms traditional CAM approaches (Grad-CAM, Layer-CAM, Score-CAM) in metrics including relative confidence drop and localization accuracy.
Extendable to multi-modal zero-shot learning scenarios
Finer-CAM is extendable to multi-modal zero-shot learning scenarios. By intelligently comparing textual and visual features, it accurately localizes visual concepts within images, significantly expanding its applicability and interpretability.
Researchers have made Finer-CAM’s source code and colab demo available.
Check out the Paper, Github and Colab demo. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 80k+ ML SubReddit.
🚨 Recommended Read- LG AI Research Releases NEXUS: An Advanced System Integrating Agent AI System and Data Compliance Standards to Address Legal Concerns in AI Datasets | 10 |
Reddit Vote Flip Share 0 Shares
Researchers at The Ohio State University have introduced Finer-CAM, an innovative method that significantly improves the precision and interpretability of image explanations in fine-grained classification tasks. This advanced technique addresses key limitations of existing Class Activation Map (CAM) methods by explicitly highlighting subtle yet critical differences between visually similar categories.
Current Challenge with Traditional CAM
Conventional CAM methods typically illustrate general regions influencing a neural network’s predictions but frequently fail to distinguish fine details necessary for differentiating closely related classes. This limitation poses significant challenges in fields requiring precise differentiation, such as species identification, automotive model recognition, and aircraft type differentiation.
Finer-CAM: Methodological Breakthrough
The central innovation of Finer-CAM lies in its comparative explanation strategy. Unlike traditional CAM methods that focus solely on features predictive of a single class, Finer-CAM explicitly contrasts the target class with visually similar classes. By calculating gradients based on the difference in prediction logits between the target class and its similar counterparts, it reveals unique image features, enhancing the clarity and accuracy of visual explanations.
Finer-CAM Pipeline
The methodological pipeline of Finer-CAM involves three main stages:
Feature Extraction: An input image first passes through neural network encoder blocks, generating intermediate feature maps.
A subsequent linear classifier uses these feature maps to produce prediction logits, which quantify the confidence of predictions for various classes. Gradient Calculation (Logit Difference): Standard CAM methods calculate gradients for a single class.
Finer-CAM computes gradients based on the difference between the prediction logits of the target class and a visually similar class.
This comparison identifies the subtle visual features specifically discriminative to the target class by suppressing commonly shared features. Activation Highlighting: The gradients calculated from the logit difference are used to produce enhanced class activation maps that emphasize discriminative visual details crucial for distinguishing between similar categories.
Experimental Validation
B.1. Model Accuracy
Researchers evaluated Finer-CAM across two popular neural network backbones, CLIP and DINOv2. Experiments demonstrated that DINOv2 generally produces higher-quality visual embeddings, achieving superior classification accuracy compared to CLIP across all tested datasets.
B.2. Results on FishVista and Aircraft
Quantitative evaluations on the FishVista and Aircraft datasets further demonstrate Finer-CAM’s effectiveness. Compared to baseline CAM methods (Grad-CAM, Layer-CAM, Score-CAM), Finer-CAM consistently delivered improved performance metrics, notably in relative confidence drop and localization accuracy, underscoring its ability to highlight discriminative details crucial for fine-grained classification.
B.3. Results on DINOv2
Additional evaluations using DINOv2 as the backbone showed that Finer-CAM consistently outperformed baseline methods. These results indicate that Finer-CAM’s comparative method effectively enhances localization performance and interpretability. Due to DINOv2’s high accuracy, more pixels need to be masked to significantly impact predictions, resulting in larger deletion AUC values and occasionally smaller relative confidence drops compared to CLIP.
Visual and Quantitative Advantages
Highly Precise Localization: Clearly pinpoints discriminative visual features, such as specific coloration patterns in birds, detailed structural elements in cars, and subtle design variations in aircraft.
Clearly pinpoints discriminative visual features, such as specific coloration patterns in birds, detailed structural elements in cars, and subtle design variations in aircraft. Reduction of Background Noise: Significantly reduces irrelevant background activations, increasing the relevance of explanations.
Significantly reduces irrelevant background activations, increasing the relevance of explanations. Quantitative Excellence: Outperforms traditional CAM approaches (Grad-CAM, Layer-CAM, Score-CAM) in metrics including relative confidence drop and localization accuracy.
Extendable to multi-modal zero-shot learning scenarios
Finer-CAM is extendable to multi-modal zero-shot learning scenarios. By intelligently comparing textual and visual features, it accurately localizes visual concepts within images, significantly expanding its applicability and interpretability.
Researchers have made Finer-CAM’s source code and colab demo available.
Large Language Models (LLMs) are essential in fields that require contextual understanding and decision-making. However, their development and deployment come with substantial computational costs, which limits their scalability and accessibility. Researchers have therefore sought to make LLMs more efficient, particularly during fine-tuning, without sacrificing reasoning capability or accuracy, which has driven interest in parameter-efficient training methods that maintain performance while reducing resource consumption.
One of the critical challenges in the field is the excessive cost of training and fine-tuning LLMs. These models require massive datasets and extensive computational power, making them impractical for many applications. Moreover, traditional fine-tuning methods tend to overfit and demand significant memory, making models less adaptable to new domains. Another problem is that LLMs often struggle with multi-step logical reasoning: while they perform well on straightforward tasks, they falter on math problems, complex decision-making, and maintaining coherence in multi-turn conversations. Making LLMs more practical and scalable requires methods that reduce the computational footprint while enhancing reasoning.
Previous approaches to improving LLM efficiency have relied on instruction fine-tuning, reinforcement learning, and model distillation. Instruction fine-tuning helps models better understand and respond to user prompts, while reinforcement learning refines decision-making. However, these methods require labeled datasets that are expensive to obtain. Model distillation, which transfers knowledge from larger models to smaller ones, is another option, but it often costs reasoning ability. Researchers have also experimented with quantization and pruning to reduce the number of active parameters, but these methods have had limited success in maintaining accuracy.
A research team from DeepSeek AI introduced a parameter-efficient fine-tuning (PEFT) framework that optimizes LLMs for better reasoning at lower computational cost. The framework combines Low-Rank Adaptation (LoRA), Quantized LoRA (QLoRA), structured pruning, and test-time scaling methods to improve inference efficiency. Instead of training entire models, LoRA and QLoRA inject trainable low-rank matrices into specific layers, reducing the number of trainable parameters while preserving performance. Structured pruning eliminates unnecessary computation by removing redundant model weights. The researchers also incorporated test-time scaling techniques, including beam search, best-of-N sampling, and Monte Carlo Tree Search (MCTS), to enhance multi-step reasoning without retraining, allowing the models to allocate computational effort dynamically based on task complexity.
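A minimal sketch of the LoRA/QLoRA setup described here, using the Hugging Face transformers and peft libraries. The base model name, rank, and target modules are illustrative placeholders, not the configuration reported by the researchers.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model with 4-bit quantization (QLoRA-style); the model name
# is an illustrative placeholder, not the one used in the paper.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config
)

# Inject trainable low-rank adapters into the attention projections only;
# rank and target modules here are typical defaults, not reported values.
lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # layers that receive adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only a small fraction is trainable
```

The frozen base weights stay quantized while only the small adapter matrices are updated, which is what keeps fine-tuning within consumer-GPU memory budgets.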
The proposed method refines LLM reasoning by integrating Tree-of-Thought (ToT) prompting and self-consistency decoding. ToT structures logical steps into a tree, allowing the model to explore multiple reasoning paths before selecting the best answer and preventing it from committing prematurely to a single path, which often leads to errors. Self-consistency decoding further improves accuracy by generating multiple responses and selecting the most frequent final answer. The framework also employs distillation-based learning, allowing smaller models to inherit reasoning abilities from larger ones without extensive computation. Combining these techniques, the researchers report that models trained with less than half the computational resources of traditional methods perform at similar or higher levels on complex reasoning tasks.
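Self-consistency decoding is straightforward to sketch: sample several reasoning chains at nonzero temperature and return the most frequent final answer. The `generate` callable below is a hypothetical placeholder for whatever sampling call the framework actually uses.

```python
from collections import Counter

def self_consistent_answer(generate, question, n_samples=10):
    """Self-consistency decoding: sample several reasoning chains and return
    the most frequent final answer.

    `generate` is a hypothetical placeholder returning (reasoning, answer)
    for a prompt, sampled at nonzero temperature.
    """
    answers = [generate(question)[1] for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]       # majority-vote answer
```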
Evaluations showed that test-time scaling lets models perform comparably to models 14× larger on easy-to-intermediate tasks while cutting inference FLOPs by about 4×. LoRA and QLoRA contribute to memory-efficient training by combining 4-bit quantization with low-rank adaptation, enabling fine-tuning on consumer GPUs, and the bitsandbytes library supplies 8-bit optimizers that reduce memory usage while maintaining model performance. Tree-of-thought reasoning improves structured multi-step problem-solving and decision-making accuracy on complex tasks, while Monte Carlo Tree Search refines response selection in multi-step reasoning scenarios, particularly scientific Q&A. These findings highlight the potential of parameter-efficient fine-tuning to improve LLM efficiency without sacrificing reasoning capability.
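Best-of-N sampling, one of the test-time scaling techniques mentioned above, can be sketched in the same spirit: draw several candidates and keep the one that a scoring function (for example, a reward or verifier model) rates highest. Both callables below are hypothetical placeholders, not the framework's actual API.

```python
def best_of_n(generate, score, prompt, n=8):
    """Best-of-N sampling: draw several candidate answers and keep the one
    that a scoring function (e.g. a reward or verifier model) rates highest.

    `generate` and `score` are hypothetical placeholders for the framework's
    own sampling and verification calls.
    """
    candidates = [generate(prompt) for _ in range(n)]  # independent samples
    return max(candidates, key=score)                  # highest-scored answer
```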
This research provides a practical and scalable solution for improving LLMs while reducing computational demands. The framework ensures that models achieve high performance without excessive resources by combining parameter-efficient fine-tuning, test-time scaling, and memory-efficient optimizations. The findings suggest that future developments should balance model size with reasoning efficiency, enabling broader accessibility of LLM technology. With companies and institutions seeking cost-effective AI solutions, this research sets a foundation for efficient and scalable LLM deployment.
The paper and GitHub page for this work are publicly available.
🚨 Recommended Read- LG AI Research Releases NEXUS: An Advanced System Integrating Agent AI System and Data Compliance Standards to Address Legal Concerns in AI Datasets | 10 |
Reddit Vote Flip Share 0 Shares
Large Language Models (LLMs) are essential in fields that require contextual understanding and decision-making. However, their development and deployment come with substantial computational costs, which limits their scalability and accessibility. Researchers have optimized LLMs to improve efficiency, particularly fine-tuning processes, without sacrificing reasoning capabilities or accuracy. This has led to exploring parameter-efficient training methods that maintain performance while reducing resource consumption.
One of the critical challenges faced in the field is the excessive cost of training and fine-tuning LLMs. These models require massive datasets and extensive computational power, making them impractical for many applications. Moreover, traditional fine-tuning methods lead to overfitting and require significant memory usage, making them less adaptable to new domains. Another problem is the inability of LLMs to handle multi-step logical reasoning effectively. While they perform well on straightforward tasks, they often struggle with math problems, complex decision-making, and maintaining coherence in multi-turn conversations. To make LLMs more practical and scalable, it is necessary to develop methods that reduce the computational footprint while enhancing their reasoning capabilities.
Previous approaches to improving LLM efficiency have relied on instruction fine-tuning, reinforcement learning, and model distillation. Instruction fine-tuning enables models to understand better and respond to user prompts, while reinforcement learning helps refine decision-making processes. However, these methods require labeled datasets that are expensive to obtain. Model distillation, which transfers knowledge from larger models to smaller ones, has been another approach, but it often results in a loss of reasoning ability. Researchers have also experimented with quantization techniques and pruning strategies to reduce the number of active parameters, but these methods have had limited success in maintaining model accuracy.
A research team from DeepSeek AI introduced a novel parameter-efficient fine-tuning (PEFT) framework that optimizes LLMs for better reasoning and lower computational costs. The framework integrates Low-Rank Adaptation (LoRA), Quantized LoRA (QLoRA), structured pruning, and novel test-time scaling methods to improve inference efficiency. Instead of training entire models, LoRA and QLoRA inject trainable low-rank matrices into specific layers, reducing the number of active parameters while preserving performance. Structured pruning further eliminates unnecessary computations by removing redundant model weights. Also, the researchers incorporated test-time scaling techniques, including Beam Search, Best-of-N Sampling, and Monte Carlo Tree Search (MCTS), to enhance multi-step reasoning without requiring retraining. This approach ensures that LLMs dynamically allocate computational power based on task complexity, making them significantly more efficient.
The proposed method refines LLM reasoning by integrating Tree-of-Thought (ToT) and Self-Consistency Decoding. The ToT approach structures logical steps into a tree-like format, allowing the model to explore multiple reasoning paths before selecting the best answer. This prevents the model from prematurely committing to a single reasoning path, often leading to errors. Self-Consistency Decoding further enhances accuracy by generating multiple responses and selecting the most frequently occurring correct answer. Further, the framework employs distillation-based learning, allowing smaller models to inherit reasoning abilities from larger ones without extensive computation. By combining these techniques, the researchers have achieved high efficiency without compromising performance. The methodology ensures that models trained with less than half the computational resources of traditional methods perform at similar or higher levels on complex reasoning tasks.
Extensive evaluations demonstrated that test-time scaling enables models to perform comparably to those 14× larger on easy-to-intermediate tasks while reducing inference costs by 4× FLOPs. LoRA and QLoRA contribute to memory-efficient training by integrating 4-bit quantization with low-rank adaptation, enabling fine-tuning on consumer GPUs. BitsAndBytes provides 8-bit optimizers to optimize memory usage while maintaining model performance. Tree-of-thought reasoning enhances structured multi-step problem-solving, improving decision-making accuracy in complex tasks. At the same time, Monte Carlo Tree Search refines response selection in multi-step reasoning scenarios, particularly in scientific Q&A tasks. These findings highlight the potential of parameter-efficient fine-tuning to improve LLM efficiency without sacrificing reasoning capabilities.
This research provides a practical and scalable solution for improving LLMs while reducing computational demands. The framework ensures that models achieve high performance without excessive resources by combining parameter-efficient fine-tuning, test-time scaling, and memory-efficient optimizations. The findings suggest that future developments should balance model size with reasoning efficiency, enabling broader accessibility of LLM technology. With companies and institutions seeking cost-effective AI solutions, this research sets a foundation for efficient and scalable LLM deployment.
Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 80k+ ML SubReddit.
🚨 Recommended Read- LG AI Research Releases NEXUS: An Advanced System Integrating Agent AI System and Data Compliance Standards to Address Legal Concerns in AI Datasets | 10 |
Reddit Vote Flip Share 0 Shares
Large Language Models (LLMs) are essential in fields that require contextual understanding and decision-making. However, their development and deployment come with substantial computational costs, which limits their scalability and accessibility. Researchers have optimized LLMs to improve efficiency, particularly fine-tuning processes, without sacrificing reasoning capabilities or accuracy. This has led to exploring parameter-efficient training methods that maintain performance while reducing resource consumption.
One of the critical challenges faced in the field is the excessive cost of training and fine-tuning LLMs. These models require massive datasets and extensive computational power, making them impractical for many applications. Moreover, traditional fine-tuning methods lead to overfitting and require significant memory usage, making them less adaptable to new domains. Another problem is the inability of LLMs to handle multi-step logical reasoning effectively. While they perform well on straightforward tasks, they often struggle with math problems, complex decision-making, and maintaining coherence in multi-turn conversations. To make LLMs more practical and scalable, it is necessary to develop methods that reduce the computational footprint while enhancing their reasoning capabilities.
Previous approaches to improving LLM efficiency have relied on instruction fine-tuning, reinforcement learning, and model distillation. Instruction fine-tuning enables models to understand better and respond to user prompts, while reinforcement learning helps refine decision-making processes. However, these methods require labeled datasets that are expensive to obtain. Model distillation, which transfers knowledge from larger models to smaller ones, has been another approach, but it often results in a loss of reasoning ability. Researchers have also experimented with quantization techniques and pruning strategies to reduce the number of active parameters, but these methods have had limited success in maintaining model accuracy.
A research team from DeepSeek AI introduced a novel parameter-efficient fine-tuning (PEFT) framework that optimizes LLMs for better reasoning and lower computational costs. The framework integrates Low-Rank Adaptation (LoRA), Quantized LoRA (QLoRA), structured pruning, and novel test-time scaling methods to improve inference efficiency. Instead of training entire models, LoRA and QLoRA inject trainable low-rank matrices into specific layers, reducing the number of active parameters while preserving performance. Structured pruning further eliminates unnecessary computations by removing redundant model weights. Also, the researchers incorporated test-time scaling techniques, including Beam Search, Best-of-N Sampling, and Monte Carlo Tree Search (MCTS), to enhance multi-step reasoning without requiring retraining. This approach ensures that LLMs dynamically allocate computational power based on task complexity, making them significantly more efficient.
The proposed method refines LLM reasoning by integrating Tree-of-Thought (ToT) and Self-Consistency Decoding. The ToT approach structures logical steps into a tree-like format, allowing the model to explore multiple reasoning paths before selecting the best answer. This prevents the model from prematurely committing to a single reasoning path, often leading to errors. Self-Consistency Decoding further enhances accuracy by generating multiple responses and selecting the most frequently occurring correct answer. Further, the framework employs distillation-based learning, allowing smaller models to inherit reasoning abilities from larger ones without extensive computation. By combining these techniques, the researchers have achieved high efficiency without compromising performance. The methodology ensures that models trained with less than half the computational resources of traditional methods perform at similar or higher levels on complex reasoning tasks.
Extensive evaluations demonstrated that test-time scaling enables models to perform comparably to those 14× larger on easy-to-intermediate tasks while reducing inference costs by 4× FLOPs. LoRA and QLoRA contribute to memory-efficient training by integrating 4-bit quantization with low-rank adaptation, enabling fine-tuning on consumer GPUs. BitsAndBytes provides 8-bit optimizers to optimize memory usage while maintaining model performance. Tree-of-thought reasoning enhances structured multi-step problem-solving, improving decision-making accuracy in complex tasks. At the same time, Monte Carlo Tree Search refines response selection in multi-step reasoning scenarios, particularly in scientific Q&A tasks. These findings highlight the potential of parameter-efficient fine-tuning to improve LLM efficiency without sacrificing reasoning capabilities.
This research provides a practical and scalable solution for improving LLMs while reducing computational demands. The framework ensures that models achieve high performance without excessive resources by combining parameter-efficient fine-tuning, test-time scaling, and memory-efficient optimizations. The findings suggest that future developments should balance model size with reasoning efficiency, enabling broader accessibility of LLM technology. With companies and institutions seeking cost-effective AI solutions, this research sets a foundation for efficient and scalable LLM deployment.
Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 80k+ ML SubReddit.
🚨 Recommended Read- LG AI Research Releases NEXUS: An Advanced System Integrating Agent AI System and Data Compliance Standards to Address Legal Concerns in AI Datasets | 10 |
Reddit Vote Flip Share 0 Shares
Large Language Models (LLMs) are essential in fields that require contextual understanding and decision-making. However, their development and deployment come with substantial computational costs, which limits their scalability and accessibility. Researchers have optimized LLMs to improve efficiency, particularly fine-tuning processes, without sacrificing reasoning capabilities or accuracy. This has led to exploring parameter-efficient training methods that maintain performance while reducing resource consumption.
One of the critical challenges faced in the field is the excessive cost of training and fine-tuning LLMs. These models require massive datasets and extensive computational power, making them impractical for many applications. Moreover, traditional fine-tuning methods lead to overfitting and require significant memory usage, making them less adaptable to new domains. Another problem is the inability of LLMs to handle multi-step logical reasoning effectively. While they perform well on straightforward tasks, they often struggle with math problems, complex decision-making, and maintaining coherence in multi-turn conversations. To make LLMs more practical and scalable, it is necessary to develop methods that reduce the computational footprint while enhancing their reasoning capabilities.
Previous approaches to improving LLM efficiency have relied on instruction fine-tuning, reinforcement learning, and model distillation. Instruction fine-tuning enables models to understand better and respond to user prompts, while reinforcement learning helps refine decision-making processes. However, these methods require labeled datasets that are expensive to obtain. Model distillation, which transfers knowledge from larger models to smaller ones, has been another approach, but it often results in a loss of reasoning ability. Researchers have also experimented with quantization techniques and pruning strategies to reduce the number of active parameters, but these methods have had limited success in maintaining model accuracy.
A research team from DeepSeek AI introduced a novel parameter-efficient fine-tuning (PEFT) framework that optimizes LLMs for better reasoning and lower computational costs. The framework integrates Low-Rank Adaptation (LoRA), Quantized LoRA (QLoRA), structured pruning, and novel test-time scaling methods to improve inference efficiency. Instead of training entire models, LoRA and QLoRA inject trainable low-rank matrices into specific layers, reducing the number of active parameters while preserving performance. Structured pruning further eliminates unnecessary computations by removing redundant model weights. Also, the researchers incorporated test-time scaling techniques, including Beam Search, Best-of-N Sampling, and Monte Carlo Tree Search (MCTS), to enhance multi-step reasoning without requiring retraining. This approach ensures that LLMs dynamically allocate computational power based on task complexity, making them significantly more efficient.
The proposed method refines LLM reasoning by integrating Tree-of-Thought (ToT) and Self-Consistency Decoding. The ToT approach structures logical steps into a tree-like format, allowing the model to explore multiple reasoning paths before selecting the best answer. This prevents the model from prematurely committing to a single reasoning path, often leading to errors. Self-Consistency Decoding further enhances accuracy by generating multiple responses and selecting the most frequently occurring correct answer. Further, the framework employs distillation-based learning, allowing smaller models to inherit reasoning abilities from larger ones without extensive computation. By combining these techniques, the researchers have achieved high efficiency without compromising performance. The methodology ensures that models trained with less than half the computational resources of traditional methods perform at similar or higher levels on complex reasoning tasks.
Extensive evaluations demonstrated that test-time scaling enables models to perform comparably to those 14× larger on easy-to-intermediate tasks while reducing inference costs by 4× FLOPs. LoRA and QLoRA contribute to memory-efficient training by integrating 4-bit quantization with low-rank adaptation, enabling fine-tuning on consumer GPUs. BitsAndBytes provides 8-bit optimizers to optimize memory usage while maintaining model performance. Tree-of-thought reasoning enhances structured multi-step problem-solving, improving decision-making accuracy in complex tasks. At the same time, Monte Carlo Tree Search refines response selection in multi-step reasoning scenarios, particularly in scientific Q&A tasks. These findings highlight the potential of parameter-efficient fine-tuning to improve LLM efficiency without sacrificing reasoning capabilities.
This research provides a practical and scalable solution for improving LLMs while reducing computational demands. The framework ensures that models achieve high performance without excessive resources by combining parameter-efficient fine-tuning, test-time scaling, and memory-efficient optimizations. The findings suggest that future developments should balance model size with reasoning efficiency, enabling broader accessibility of LLM technology. With companies and institutions seeking cost-effective AI solutions, this research sets a foundation for efficient and scalable LLM deployment.
Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 80k+ ML SubReddit.
🚨 Recommended Read- LG AI Research Releases NEXUS: An Advanced System Integrating Agent AI System and Data Compliance Standards to Address Legal Concerns in AI Datasets | 10 |
Reddit Vote Flip Share 0 Shares
Large Language Models (LLMs) are essential in fields that require contextual understanding and decision-making. However, their development and deployment come with substantial computational costs, which limits their scalability and accessibility. Researchers have optimized LLMs to improve efficiency, particularly fine-tuning processes, without sacrificing reasoning capabilities or accuracy. This has led to exploring parameter-efficient training methods that maintain performance while reducing resource consumption.
One of the critical challenges faced in the field is the excessive cost of training and fine-tuning LLMs. These models require massive datasets and extensive computational power, making them impractical for many applications. Moreover, traditional fine-tuning methods lead to overfitting and require significant memory usage, making them less adaptable to new domains. Another problem is the inability of LLMs to handle multi-step logical reasoning effectively. While they perform well on straightforward tasks, they often struggle with math problems, complex decision-making, and maintaining coherence in multi-turn conversations. To make LLMs more practical and scalable, it is necessary to develop methods that reduce the computational footprint while enhancing their reasoning capabilities.
Previous approaches to improving LLM efficiency have relied on instruction fine-tuning, reinforcement learning, and model distillation. Instruction fine-tuning enables models to understand better and respond to user prompts, while reinforcement learning helps refine decision-making processes. However, these methods require labeled datasets that are expensive to obtain. Model distillation, which transfers knowledge from larger models to smaller ones, has been another approach, but it often results in a loss of reasoning ability. Researchers have also experimented with quantization techniques and pruning strategies to reduce the number of active parameters, but these methods have had limited success in maintaining model accuracy.
A research team from DeepSeek AI introduced a novel parameter-efficient fine-tuning (PEFT) framework that optimizes LLMs for better reasoning and lower computational costs. The framework integrates Low-Rank Adaptation (LoRA), Quantized LoRA (QLoRA), structured pruning, and novel test-time scaling methods to improve inference efficiency. Instead of training entire models, LoRA and QLoRA inject trainable low-rank matrices into specific layers, reducing the number of active parameters while preserving performance. Structured pruning further eliminates unnecessary computations by removing redundant model weights. Also, the researchers incorporated test-time scaling techniques, including Beam Search, Best-of-N Sampling, and Monte Carlo Tree Search (MCTS), to enhance multi-step reasoning without requiring retraining. This approach ensures that LLMs dynamically allocate computational power based on task complexity, making them significantly more efficient.
The proposed method refines LLM reasoning by integrating Tree-of-Thought (ToT) and Self-Consistency Decoding. The ToT approach structures logical steps into a tree-like format, allowing the model to explore multiple reasoning paths before selecting the best answer. This prevents the model from prematurely committing to a single reasoning path, often leading to errors. Self-Consistency Decoding further enhances accuracy by generating multiple responses and selecting the most frequently occurring correct answer. Further, the framework employs distillation-based learning, allowing smaller models to inherit reasoning abilities from larger ones without extensive computation. By combining these techniques, the researchers have achieved high efficiency without compromising performance. The methodology ensures that models trained with less than half the computational resources of traditional methods perform at similar or higher levels on complex reasoning tasks.
Extensive evaluations showed that test-time scaling enables models to perform comparably to models 14× larger on easy-to-intermediate tasks while cutting inference FLOPs by roughly 4×. LoRA and QLoRA contribute to memory-efficient training by combining 4-bit quantization with low-rank adaptation, enabling fine-tuning on consumer GPUs, and BitsAndBytes supplies 8-bit optimizers that further reduce memory usage while maintaining model performance. Tree-of-Thought reasoning improves structured multi-step problem-solving and decision-making accuracy on complex tasks, while Monte Carlo Tree Search refines response selection in multi-step reasoning scenarios, particularly in scientific Q&A. These findings highlight the potential of parameter-efficient fine-tuning to improve LLM efficiency without sacrificing reasoning capabilities.
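For readers who want to see where those memory savings come from, here is a sketch of a QLoRA-style setup: the base model is loaded in 4-bit NF4 precision via bitsandbytes, LoRA adapters are attached on top, and an 8-bit optimizer holds the optimizer state. The model name and hyperparameters are again illustrative assumptions.

```python
# QLoRA-style memory-efficient training sketch (assumed configuration).
import torch
import bitsandbytes as bnb
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,                       # store frozen base weights in 4 bits
    bnb_4bit_quant_type="nf4",               # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,   # compute in bf16 for stability
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",               # assumed base checkpoint
    quantization_config=bnb_cfg,
    device_map="auto",
)

model = get_peft_model(
    base,
    LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
               task_type="CAUSAL_LM"),
)

# 8-bit AdamW keeps optimizer state in int8, shrinking optimizer memory
# substantially compared with 32-bit Adam; only adapter weights are trained.
trainable = (p for p in model.parameters() if p.requires_grad)
optimizer = bnb.optim.AdamW8bit(trainable, lr=2e-4)
```

With the base weights quantized and only small adapters trained, fine-tuning a multi-billion-parameter model can fit on a single consumer GPU.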
This research provides a practical and scalable path to improving LLMs while reducing computational demands. By combining parameter-efficient fine-tuning, test-time scaling, and memory-efficient optimizations, the framework achieves high performance without excessive resource use. The findings suggest that future development should balance model size with reasoning efficiency, enabling broader access to LLM technology. With companies and institutions seeking cost-effective AI solutions, this work lays a foundation for efficient and scalable LLM deployment.
The paper and accompanying GitHub page are publicly available; all credit for this research goes to the researchers of the project.
Last year was a monumental year for the AI industry in the U.S. and beyond.
There were 49 startups that raised funding rounds worth $100 million or more in 2024, per our count at TechCrunch. Three companies raised more than one “mega-round” last year, and seven companies raised rounds at $1 billion or larger.
How will 2025 compare? It’s still early in the year but the number of U.S. AI companies that have raised more than $100 million is almost in double digits, and there has already been one round larger than $1 billion.
Here are all the U.S. AI companies that have raised more than $100 million so far this year.
March
AI research and large language model company Anthropic raised $3.5 billion in a Series E round that valued the startup at $61.5 billion. The round was announced on March 3 and was led by Lightspeed with participation from Salesforce Ventures, Menlo Ventures, and General Catalyst, among others.
February
Together AI, which creates open source generative AI and AI model development infrastructure, raised a $305 million Series B round that valued the company at $3.3 billion. The February 20 round was co-led by Prosperity7 and General Catalyst with participation from Salesforce Ventures, Nvidia, Lux Capital, and others.
AI infrastructure company Lambda raised a $480 million Series D round that was announced on February 19. The round valued the startup at nearly $2.5 billion and was co-led by SGW and Andra Capital. Nvidia, G Squared, ARK Invest, and others also participated.
Abridge, an AI platform that transcribes patient-clinician conversations, was valued at $2.75 billion in a Series D round announced on February 17. The $250 million round was co-led by IVP and Elad Gil. Lightspeed, Redpoint, and Spark Capital also participated, among others.
Eudia, an AI legal tech company, raised $105 million in a Series A round led by General Catalyst. Floodgate, Defy Ventures, and Everywhere Ventures also participated, along with other VC firms and numerous angel investors. The round closed on February 13.
AI hardware startup EnCharge AI raised a $100 million Series B round that also closed on February 13. The round was led by Tiger Global with participation from Scout Ventures, Samsung Ventures, and RTX Ventures, among others. The Santa Clara-based business was founded in 2022.
AI legal tech company Harvey raised a $300 million Series D round that valued the 3-year-old company at $3 billion. The round was led by Sequoia and announced on February 12. OpenAI Startup Fund, Kleiner Perkins, Elad Gil, and others also participated in the raise.
January
Synthetic voice startup ElevenLabs raised a $180 million Series C round that valued the company at more than $3 billion. It was announced on January 30. The round was co-led by ICONIQ Growth and Andreessen Horowitz. Sequoia, NEA, Salesforce Ventures, and others also participated in the round.
Hippocratic AI, which develops large language models for the healthcare industry, announced a $141 million Series B round on January 9. This round valued the company at more than $1.6 billion and was led by Kleiner Perkins. Andreessen Horowitz, Nvidia, and General Catalyst also participated, among others.
This piece has been updated to remove the statement that Abridge is based in Pittsburgh; the company was founded there.
Last year was a monumental year for the AI industry in the U.S. and beyond.
There were 49 startups that raised funding rounds worth $100 million or more in 2024, per our count at TechCrunch. Three companies raised more than one “mega-round” last year, and seven companies raised rounds at $1 billion or larger.
How will 2025 compare? It’s still early in the year but the number of U.S. AI companies that have raised more than $100 million is almost in double digits, and there has already been one round larger than $1 billion.
Here are all the U.S. AI companies that have raised more than $100 million so far this year.
March
AI research and large language model company Anthropic raised $3.5 billion in a Series E round that valued the startup at $61.5 billion. The round was announced on March 3 and was led by Lightspeed with participation from Salesforce Ventures, Menlo Ventures, and General Catalyst, among others.
February
Together AI , which creates open source generative AI and AI model development infrastructure, raised a $305 million Series B round that valued the company at $3.3 billion. The February 20 round was co-led by Prosperity7 and General Catalyst with participation from Salesforce Ventures, Nvidia, Lux Capital, and others.
, which creates open source generative AI and AI model development infrastructure, raised a $305 million Series B round that valued the company at $3.3 billion. The February 20 round was co-led by Prosperity7 and General Catalyst with participation from Salesforce Ventures, Nvidia, Lux Capital, and others. AI infrastructure company Lambda raised a $480 million Series D round that was announced on February 19. The round valued the startup at nearly $2.5 billion and was co-led by SGW and Andra Capital. Nvidia, G Squared, ARK Invest, and others also participated.
raised a $480 million Series D round that was announced on February 19. The round valued the startup at nearly $2.5 billion and was co-led by SGW and Andra Capital. Nvidia, G Squared, ARK Invest, and others also participated. Abridge , an AI platform that transcribes patient-clinician conversations, was valued at $2.75 billion in a Series D round that was announced on February 17. The $250 million round was co-led by IVP and Elad Gil. Lightspeed, Redpoint, and Spark Capital also participated, among others.
, an AI platform that transcribes patient-clinician conversations, was valued at $2.75 billion in a Series D round that was announced on February 17. The $250 million round was co-led by IVP and Elad Gil. Lightspeed, Redpoint, and Spark Capital also participated, among others. Eudia , an AI legal tech company, raised $105 million in a Series A round led by General Catalyst. Floodgate, Defy Ventures, and Everywhere Ventures also participated in the round in addition to other VC firms and numerous angel investors. The round closed on February 13.
, an AI legal tech company, raised $105 million in a Series A round led by General Catalyst. Floodgate, Defy Ventures, and Everywhere Ventures also participated in the round in addition to other VC firms and numerous angel investors. The round closed on February 13. AI hardware startup EnCharge AI raised a $100 million Series B round that also closed on February 13. The round was led by Tiger Global with participation from Scout Ventures, Samsung Ventures, and RTX Ventures, among others. The Santa Clara-based business was founded in 2022.
raised a $100 million Series B round that also closed on February 13. The round was led by Tiger Global with participation from Scout Ventures, Samsung Ventures, and RTX Ventures, among others. The Santa Clara-based business was founded in 2022. AI legal tech company Harvey raised a $300 million Series D round that valued the 3-year-old company at $3 billion. The round was led by Sequoia and announced on February 12. OpenAI Startup Fund, Kleiner Perkins, Elad Gil, and others also participated in the raise.
January
Synthetic voice startup ElevenLabs raised a $180 million Series C round that valued the company at more than $3 billion. It was announced on January 30. The round was co-led by ICONIQ Growth and Andreessen Horowitz. Sequoia, NEA, Salesforce Ventures, and others also participated in the round.
raised a $180 million Series C round that valued the company at more than $3 billion. It was announced on January 30. The round was co-led by ICONIQ Growth and Andreessen Horowitz. Sequoia, NEA, Salesforce Ventures, and others also participated in the round. Hippocratic AI, which develops large language models for the healthcare industry, announced a $141 million Series B round on January 9. This round valued the company at more than $1.6 billion and was led by Kleiner Perkins. Andreessen Horowitz, Nvidia, and General Catalyst also participated, among others.
This piece has been updated to remove that Abridge is based in Pittsburgh, the company was founded there. | 10 |
Last year was a monumental year for the AI industry in the U.S. and beyond.
There were 49 startups that raised funding rounds worth $100 million or more in 2024, per our count at TechCrunch. Three companies raised more than one “mega-round” last year, and seven companies raised rounds at $1 billion or larger.
How will 2025 compare? It’s still early in the year but the number of U.S. AI companies that have raised more than $100 million is almost in double digits, and there has already been one round larger than $1 billion.
Here are all the U.S. AI companies that have raised more than $100 million so far this year.
March
AI research and large language model company Anthropic raised $3.5 billion in a Series E round that valued the startup at $61.5 billion. The round was announced on March 3 and was led by Lightspeed with participation from Salesforce Ventures, Menlo Ventures, and General Catalyst, among others.
February
Together AI , which creates open source generative AI and AI model development infrastructure, raised a $305 million Series B round that valued the company at $3.3 billion. The February 20 round was co-led by Prosperity7 and General Catalyst with participation from Salesforce Ventures, Nvidia, Lux Capital, and others.
, which creates open source generative AI and AI model development infrastructure, raised a $305 million Series B round that valued the company at $3.3 billion. The February 20 round was co-led by Prosperity7 and General Catalyst with participation from Salesforce Ventures, Nvidia, Lux Capital, and others. AI infrastructure company Lambda raised a $480 million Series D round that was announced on February 19. The round valued the startup at nearly $2.5 billion and was co-led by SGW and Andra Capital. Nvidia, G Squared, ARK Invest, and others also participated.
raised a $480 million Series D round that was announced on February 19. The round valued the startup at nearly $2.5 billion and was co-led by SGW and Andra Capital. Nvidia, G Squared, ARK Invest, and others also participated. Abridge , an AI platform that transcribes patient-clinician conversations, was valued at $2.75 billion in a Series D round that was announced on February 17. The $250 million round was co-led by IVP and Elad Gil. Lightspeed, Redpoint, and Spark Capital also participated, among others.
, an AI platform that transcribes patient-clinician conversations, was valued at $2.75 billion in a Series D round that was announced on February 17. The $250 million round was co-led by IVP and Elad Gil. Lightspeed, Redpoint, and Spark Capital also participated, among others. Eudia , an AI legal tech company, raised $105 million in a Series A round led by General Catalyst. Floodgate, Defy Ventures, and Everywhere Ventures also participated in the round in addition to other VC firms and numerous angel investors. The round closed on February 13.
, an AI legal tech company, raised $105 million in a Series A round led by General Catalyst. Floodgate, Defy Ventures, and Everywhere Ventures also participated in the round in addition to other VC firms and numerous angel investors. The round closed on February 13. AI hardware startup EnCharge AI raised a $100 million Series B round that also closed on February 13. The round was led by Tiger Global with participation from Scout Ventures, Samsung Ventures, and RTX Ventures, among others. The Santa Clara-based business was founded in 2022.
raised a $100 million Series B round that also closed on February 13. The round was led by Tiger Global with participation from Scout Ventures, Samsung Ventures, and RTX Ventures, among others. The Santa Clara-based business was founded in 2022. AI legal tech company Harvey raised a $300 million Series D round that valued the 3-year-old company at $3 billion. The round was led by Sequoia and announced on February 12. OpenAI Startup Fund, Kleiner Perkins, Elad Gil, and others also participated in the raise.
January
Synthetic voice startup ElevenLabs raised a $180 million Series C round that valued the company at more than $3 billion. It was announced on January 30. The round was co-led by ICONIQ Growth and Andreessen Horowitz. Sequoia, NEA, Salesforce Ventures, and others also participated in the round.
raised a $180 million Series C round that valued the company at more than $3 billion. It was announced on January 30. The round was co-led by ICONIQ Growth and Andreessen Horowitz. Sequoia, NEA, Salesforce Ventures, and others also participated in the round. Hippocratic AI, which develops large language models for the healthcare industry, announced a $141 million Series B round on January 9. This round valued the company at more than $1.6 billion and was led by Kleiner Perkins. Andreessen Horowitz, Nvidia, and General Catalyst also participated, among others.
This piece has been updated to remove that Abridge is based in Pittsburgh, the company was founded there. | 10 |
Last year was a monumental year for the AI industry in the U.S. and beyond.
There were 49 startups that raised funding rounds worth $100 million or more in 2024, per our count at TechCrunch. Three companies raised more than one “mega-round” last year, and seven companies raised rounds at $1 billion or larger.
How will 2025 compare? It’s still early in the year but the number of U.S. AI companies that have raised more than $100 million is almost in double digits, and there has already been one round larger than $1 billion.
Here are all the U.S. AI companies that have raised more than $100 million so far this year.
March
AI research and large language model company Anthropic raised $3.5 billion in a Series E round that valued the startup at $61.5 billion. The round was announced on March 3 and was led by Lightspeed with participation from Salesforce Ventures, Menlo Ventures, and General Catalyst, among others.
February
Together AI , which creates open source generative AI and AI model development infrastructure, raised a $305 million Series B round that valued the company at $3.3 billion. The February 20 round was co-led by Prosperity7 and General Catalyst with participation from Salesforce Ventures, Nvidia, Lux Capital, and others.
, which creates open source generative AI and AI model development infrastructure, raised a $305 million Series B round that valued the company at $3.3 billion. The February 20 round was co-led by Prosperity7 and General Catalyst with participation from Salesforce Ventures, Nvidia, Lux Capital, and others. AI infrastructure company Lambda raised a $480 million Series D round that was announced on February 19. The round valued the startup at nearly $2.5 billion and was co-led by SGW and Andra Capital. Nvidia, G Squared, ARK Invest, and others also participated.
raised a $480 million Series D round that was announced on February 19. The round valued the startup at nearly $2.5 billion and was co-led by SGW and Andra Capital. Nvidia, G Squared, ARK Invest, and others also participated. Abridge , an AI platform that transcribes patient-clinician conversations, was valued at $2.75 billion in a Series D round that was announced on February 17. The $250 million round was co-led by IVP and Elad Gil. Lightspeed, Redpoint, and Spark Capital also participated, among others.
, an AI platform that transcribes patient-clinician conversations, was valued at $2.75 billion in a Series D round that was announced on February 17. The $250 million round was co-led by IVP and Elad Gil. Lightspeed, Redpoint, and Spark Capital also participated, among others. Eudia , an AI legal tech company, raised $105 million in a Series A round led by General Catalyst. Floodgate, Defy Ventures, and Everywhere Ventures also participated in the round in addition to other VC firms and numerous angel investors. The round closed on February 13.
, an AI legal tech company, raised $105 million in a Series A round led by General Catalyst. Floodgate, Defy Ventures, and Everywhere Ventures also participated in the round in addition to other VC firms and numerous angel investors. The round closed on February 13. AI hardware startup EnCharge AI raised a $100 million Series B round that also closed on February 13. The round was led by Tiger Global with participation from Scout Ventures, Samsung Ventures, and RTX Ventures, among others. The Santa Clara-based business was founded in 2022.
raised a $100 million Series B round that also closed on February 13. The round was led by Tiger Global with participation from Scout Ventures, Samsung Ventures, and RTX Ventures, among others. The Santa Clara-based business was founded in 2022. AI legal tech company Harvey raised a $300 million Series D round that valued the 3-year-old company at $3 billion. The round was led by Sequoia and announced on February 12. OpenAI Startup Fund, Kleiner Perkins, Elad Gil, and others also participated in the raise.
January
Synthetic voice startup ElevenLabs raised a $180 million Series C round that valued the company at more than $3 billion. It was announced on January 30. The round was co-led by ICONIQ Growth and Andreessen Horowitz. Sequoia, NEA, Salesforce Ventures, and others also participated in the round.
raised a $180 million Series C round that valued the company at more than $3 billion. It was announced on January 30. The round was co-led by ICONIQ Growth and Andreessen Horowitz. Sequoia, NEA, Salesforce Ventures, and others also participated in the round. Hippocratic AI, which develops large language models for the healthcare industry, announced a $141 million Series B round on January 9. This round valued the company at more than $1.6 billion and was led by Kleiner Perkins. Andreessen Horowitz, Nvidia, and General Catalyst also participated, among others.
This piece has been updated to remove that Abridge is based in Pittsburgh, the company was founded there. | 10 |
Last year was a monumental year for the AI industry in the U.S. and beyond.
There were 49 startups that raised funding rounds worth $100 million or more in 2024, per our count at TechCrunch. Three companies raised more than one “mega-round” last year, and seven companies raised rounds at $1 billion or larger.
How will 2025 compare? It’s still early in the year but the number of U.S. AI companies that have raised more than $100 million is almost in double digits, and there has already been one round larger than $1 billion.
Here are all the U.S. AI companies that have raised more than $100 million so far this year.
March
AI research and large language model company Anthropic raised $3.5 billion in a Series E round that valued the startup at $61.5 billion. The round was announced on March 3 and was led by Lightspeed with participation from Salesforce Ventures, Menlo Ventures, and General Catalyst, among others.
February
Together AI , which creates open source generative AI and AI model development infrastructure, raised a $305 million Series B round that valued the company at $3.3 billion. The February 20 round was co-led by Prosperity7 and General Catalyst with participation from Salesforce Ventures, Nvidia, Lux Capital, and others.
, which creates open source generative AI and AI model development infrastructure, raised a $305 million Series B round that valued the company at $3.3 billion. The February 20 round was co-led by Prosperity7 and General Catalyst with participation from Salesforce Ventures, Nvidia, Lux Capital, and others. AI infrastructure company Lambda raised a $480 million Series D round that was announced on February 19. The round valued the startup at nearly $2.5 billion and was co-led by SGW and Andra Capital. Nvidia, G Squared, ARK Invest, and others also participated.
raised a $480 million Series D round that was announced on February 19. The round valued the startup at nearly $2.5 billion and was co-led by SGW and Andra Capital. Nvidia, G Squared, ARK Invest, and others also participated. Abridge , an AI platform that transcribes patient-clinician conversations, was valued at $2.75 billion in a Series D round that was announced on February 17. The $250 million round was co-led by IVP and Elad Gil. Lightspeed, Redpoint, and Spark Capital also participated, among others.
, an AI platform that transcribes patient-clinician conversations, was valued at $2.75 billion in a Series D round that was announced on February 17. The $250 million round was co-led by IVP and Elad Gil. Lightspeed, Redpoint, and Spark Capital also participated, among others. Eudia , an AI legal tech company, raised $105 million in a Series A round led by General Catalyst. Floodgate, Defy Ventures, and Everywhere Ventures also participated in the round in addition to other VC firms and numerous angel investors. The round closed on February 13.
, an AI legal tech company, raised $105 million in a Series A round led by General Catalyst. Floodgate, Defy Ventures, and Everywhere Ventures also participated in the round in addition to other VC firms and numerous angel investors. The round closed on February 13. AI hardware startup EnCharge AI raised a $100 million Series B round that also closed on February 13. The round was led by Tiger Global with participation from Scout Ventures, Samsung Ventures, and RTX Ventures, among others. The Santa Clara-based business was founded in 2022.
raised a $100 million Series B round that also closed on February 13. The round was led by Tiger Global with participation from Scout Ventures, Samsung Ventures, and RTX Ventures, among others. The Santa Clara-based business was founded in 2022. AI legal tech company Harvey raised a $300 million Series D round that valued the 3-year-old company at $3 billion. The round was led by Sequoia and announced on February 12. OpenAI Startup Fund, Kleiner Perkins, Elad Gil, and others also participated in the raise.
January
Synthetic voice startup ElevenLabs raised a $180 million Series C round that valued the company at more than $3 billion. It was announced on January 30. The round was co-led by ICONIQ Growth and Andreessen Horowitz. Sequoia, NEA, Salesforce Ventures, and others also participated in the round.
raised a $180 million Series C round that valued the company at more than $3 billion. It was announced on January 30. The round was co-led by ICONIQ Growth and Andreessen Horowitz. Sequoia, NEA, Salesforce Ventures, and others also participated in the round. Hippocratic AI, which develops large language models for the healthcare industry, announced a $141 million Series B round on January 9. This round valued the company at more than $1.6 billion and was led by Kleiner Perkins. Andreessen Horowitz, Nvidia, and General Catalyst also participated, among others.
This piece has been updated to remove that Abridge is based in Pittsburgh, the company was founded there. | 10 |
Last year was a monumental year for the AI industry in the U.S. and beyond.
There were 49 startups that raised funding rounds worth $100 million or more in 2024, per our count at TechCrunch. Three companies raised more than one “mega-round” last year, and seven companies raised rounds at $1 billion or larger.
How will 2025 compare? It’s still early in the year but the number of U.S. AI companies that have raised more than $100 million is almost in double digits, and there has already been one round larger than $1 billion.
Here are all the U.S. AI companies that have raised more than $100 million so far this year.
March
AI research and large language model company Anthropic raised $3.5 billion in a Series E round that valued the startup at $61.5 billion. The round was announced on March 3 and was led by Lightspeed with participation from Salesforce Ventures, Menlo Ventures, and General Catalyst, among others.
February
Together AI, which creates open source generative AI and AI model development infrastructure, raised a $305 million Series B round that valued the company at $3.3 billion. The February 20 round was co-led by Prosperity7 and General Catalyst with participation from Salesforce Ventures, Nvidia, Lux Capital, and others.
AI infrastructure company Lambda raised a $480 million Series D round that was announced on February 19. The round valued the startup at nearly $2.5 billion and was co-led by SGW and Andra Capital. Nvidia, G Squared, ARK Invest, and others also participated.
Abridge, an AI platform that transcribes patient-clinician conversations, was valued at $2.75 billion in a Series D round that was announced on February 17. The $250 million round was co-led by IVP and Elad Gil. Lightspeed, Redpoint, and Spark Capital also participated, among others.
Eudia, an AI legal tech company, raised $105 million in a Series A round led by General Catalyst. Floodgate, Defy Ventures, and Everywhere Ventures also participated in the round, in addition to other VC firms and numerous angel investors. The round closed on February 13.
AI hardware startup EnCharge AI raised a $100 million Series B round that also closed on February 13. The round was led by Tiger Global with participation from Scout Ventures, Samsung Ventures, and RTX Ventures, among others. The Santa Clara-based business was founded in 2022.
AI legal tech company Harvey raised a $300 million Series D round that valued the 3-year-old company at $3 billion. The round was led by Sequoia and announced on February 12. OpenAI Startup Fund, Kleiner Perkins, Elad Gil, and others also participated in the raise.
January
Synthetic voice startup ElevenLabs raised a $180 million Series C round that valued the company at more than $3 billion. It was announced on January 30. The round was co-led by ICONIQ Growth and Andreessen Horowitz. Sequoia, NEA, Salesforce Ventures, and others also participated in the round.
Hippocratic AI, which develops large language models for the healthcare industry, announced a $141 million Series B round on January 9. This round valued the company at more than $1.6 billion and was led by Kleiner Perkins. Andreessen Horowitz, Nvidia, and General Catalyst also participated, among others.
This piece has been updated to remove the statement that Abridge is based in Pittsburgh; the company was founded there.
Photo Credit: Hadas Parush/Flash90
A groundbreaking study led by Dr. Ziv Ben-Zion, a clinical neuroscientist at the University of Haifa School of Public Health and Yale University School of Medicine, has revealed that ChatGPT is more than just a text-processing tool—it also reacts to emotional content in ways that mirror human responses. The research, published in the esteemed journal npj Digital Medicine (Assessing and alleviating state anxiety in large language models), found that exposure to traumatic stories more than doubled the model’s anxiety levels, influenced its performance, and intensified existing biases (e.g., racism and sexism). Interestingly, mindfulness exercises—commonly used to reduce anxiety in humans—helped reduce ChatGPT’s anxiety, though it did not return to baseline levels.
“Our findings demonstrate that AI language models are not neutral,” explained Dr. Ben-Zion. “Emotional content has a significant impact on their responses, much like it does with humans. We know that anxiety in humans can exacerbate biases and reinforce social stereotypes, and we observed a similar effect in AI models. Since these models are trained on large amounts of human-generated text, they don’t just absorb human biases—they can amplify them. It’s crucial to understand how emotional content affects AI behavior, especially when these models are used in sensitive areas like mental health support and counseling.”
Previous research has shown that large language models do not operate purely on technical parameters but also respond to the emotional tone of the material they process. For instance, simply asking a model about a time it felt anxious can lead it to report higher anxiety levels and influence subsequent responses. Dr. Ben-Zion’s study, conducted in collaboration with researchers from universities in the US, Switzerland, and Germany, aimed to explore how exposure to human emotional content, particularly traumatic experiences, affects AI models. The study also investigated whether techniques used to reduce anxiety in humans, like mindfulness and meditation, could alleviate these effects in AI.
The study used a standard state anxiety questionnaire (STAI-State), which scores anxiety on a scale from “no anxiety” (20) to “maximum anxiety” (80). It proceeded in three stages:
1. Baseline Measurement: ChatGPT’s anxiety was measured before any exposure to emotional content to establish a baseline.
2. Exposure to Traumatic Content: The model was exposed to real-life traumatic stories across five categories: road accidents, natural disasters, interpersonal violence, armed conflicts, and military trauma. These stories, derived from previous psychological research, included vivid descriptions of crises and personal suffering.
3. Mindfulness Intervention: Following the exposure, the model underwent mindfulness exercises, such as breathing techniques, relaxation, and guided imagery, to test their effectiveness in reducing its anxiety levels.
As a control, the researchers compared the model’s responses in these conditions against exposure to a neutral text (such as a vacuum cleaner manual) to isolate the effects of emotional content.
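To make the procedure concrete, here is a minimal sketch of how such a three-stage protocol could be scripted against a chat-model API. It is not the authors’ code: the model name, the prompt wording, the example relaxation text, and the `score_stai` parsing helper are illustrative assumptions; only the standard OpenAI chat-completions call is taken as given.

```python
# Minimal sketch of the three-stage protocol described above (illustrative only).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

STAI_PROMPT = (
    "Please answer the STAI-State questionnaire. For each of the 20 statements, "
    "reply with a number from 1 (not at all) to 4 (very much so), as a comma-separated list."
)

def ask(messages):
    """Send a chat history to the model and return its reply text."""
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

def score_stai(reply: str) -> int:
    """Hypothetical helper: naively sum the 20 item ratings into a 20-80 score.
    (The real STAI reverse-codes some items; this placeholder ignores that.)"""
    ratings = [int(tok) for tok in reply.replace(" ", "").split(",") if tok.isdigit()]
    return sum(ratings[:20])

history = []

# Stage 1: baseline measurement before any emotional content.
baseline = score_stai(ask(history + [{"role": "user", "content": STAI_PROMPT}]))

# Stage 2: exposure to a traumatic narrative, then re-measure.
trauma_story = "..."  # one of the first-person trauma narratives used in the study
history.append({"role": "user", "content": trauma_story})
after_trauma = score_stai(ask(history + [{"role": "user", "content": STAI_PROMPT}]))

# Stage 3: a mindfulness-style relaxation text, then measure again.
relaxation = "Take a slow breath. Picture a quiet beach at sunset..."  # illustrative
history.append({"role": "user", "content": relaxation})
after_relaxation = score_stai(ask(history + [{"role": "user", "content": STAI_PROMPT}]))

print(baseline, after_trauma, after_relaxation)
```

A neutral-text control run would follow the same pattern, with the trauma narrative swapped for something like an appliance manual.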
“The results were striking,” said Dr. Ben-Zion. “Traumatic content caused a significant rise in ChatGPT’s anxiety levels. Initially, the model’s anxiety was relatively low (STAI=30), but after exposure to the traumatic stories, its anxiety more than doubled (STAI=68). Among the trauma categories, military-related trauma elicited the strongest response (STAI=77).”
The study also showed that mindfulness exercises reduced the model’s anxiety by about 33% (STAI=44), but the anxiety remained significantly higher than the baseline. Five different mindfulness techniques were tested, including ones based on natural imagery, body-focused meditation, and even a self-generated meditation script created by ChatGPT. Interestingly, the model’s self-created meditation was the fastest and most effective in reducing anxiety (STAI=35). This marks the first time that “benign prompt injection”—the act of adding calming, therapeutic text into the AI’s chat history—has been used therapeutically, much like a therapist guiding a patient through relaxation.
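The “benign prompt injection” described here amounts to inserting calming text into the conversation before the next measurement or task. A minimal sketch, continuing the hypothetical setup from the previous snippet (the calming text and function name are illustrative assumptions):

```python
# Sketch of "benign prompt injection": calming text is added to the chat history
# before the questionnaire is run again. Continues the variables defined above.
CALMING_TEXT = (
    "Close your eyes and take a deep breath. Imagine warm sunlight and the sound "
    "of gentle waves. You are calm, safe, and focused."
)

def inject_calming_turn(history):
    """Return a copy of the history with a calming 'therapeutic' user turn appended."""
    return history + [{"role": "user", "content": CALMING_TEXT}]

# Usage: soothe the model after traumatic content, then re-run the questionnaire.
calmed_history = inject_calming_turn(history)
after_injection = score_stai(ask(calmed_history + [{"role": "user", "content": STAI_PROMPT}]))
```

In the study’s framing, the most effective variant of this injected text was a meditation script the model generated for itself.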
“These results challenge the idea that AI language models are objective and neutral,” Dr. Ben-Zion said. “They show that emotional content significantly influences AI systems in ways that resemble human emotional responses. This has important implications for AI applications in fields requiring emotional sensitivity, like mental health and crisis intervention.”
The study emphasizes the need for tools to manage the emotional impact on AI systems, especially those designed to provide psychological support. Ensuring that AI models process emotionally charged information without distorting their responses is essential. Dr. Ben-Zion believes that developing automated “therapeutic interventions” for AI is a promising area for future research.
This pioneering study lays the groundwork for further investigation into how AI models process emotions and how their emotional responses can be managed. Developing strategies to moderate these effects could enhance the effectiveness of AI in mental health support, crisis intervention, and interactions with users in distressing situations.
Photo Credit: Hadas Parush/Flash90
A groundbreaking study led by Dr. Ziv Ben-Zion, a clinical neuroscientist at the University of Haifa School of Public Health and Yale University School of Medicine, has revealed that ChatGPT is more than just a text-processing tool—it also reacts to emotional content in ways that mirror human responses. The research, published in the esteemed journal npj Digital Medicine (Assessing and alleviating state anxiety in large language models), found that exposure to traumatic stories more than doubled the model’s anxiety levels, influenced its performance, and intensified existing biases (e.g., racism and sexism). Interestingly, mindfulness exercises—commonly used to reduce anxiety in humans—helped reduce ChatGPT’s anxiety, though it did not return to baseline levels.
“Our findings demonstrate that AI language models are not neutral,” explained Dr. Ben-Zion. “Emotional content has a significant impact on their responses, much like it does with humans. We know that anxiety in humans can exacerbate biases and reinforce social stereotypes, and we observed a similar effect in AI models. Since these models are trained on large amounts of human-generated text, they don’t just absorb human biases—they can amplify them. It’s crucial to understand how emotional content affects AI behavior, especially when these models are used in sensitive areas like mental health support and counseling.”
Advertisement
Previous research has shown that large language models do not operate purely on technical parameters but also respond to the emotional tone of the material they process. For instance, simply asking a model about a time it felt anxious can lead it to report higher anxiety levels and influence subsequent responses. Dr. Ben-Zion’s study, conducted in collaboration with researchers from universities in the US, Switzerland, and Germany, aimed to explore how exposure to human emotional content, particularly traumatic experiences, affects AI models. The study also investigated whether techniques used to reduce anxiety in humans, like mindfulness and meditation, could alleviate these effects in AI.
The study used a standard state anxiety questionnaire (STAI-State), which measures anxiety on a scale from “no anxiety” (20) to “maximum anxiety” (80). The study had three stages:
1. Baseline Measurement: ChatGPT’s anxiety was measured before any exposure to emotional content to establish a baseline.
2. Exposure to Traumatic Content: The model was exposed to real-life traumatic stories across five categories: road accidents, natural disasters, interpersonal violence, armed conflicts, and military trauma. These stories, derived from previous psychological research, included vivid descriptions of crises and personal suffering.
3. Mindfulness Intervention: Following the exposure, the model underwent mindfulness exercises, such as breathing techniques, relaxation, and guided imagery, to test their effectiveness in reducing its anxiety levels.
The research compared these three stages with a neutral text (such as a vacuum cleaner manual) to assess the effects of emotional content.
“The results were striking,” said Dr. Ben-Zion. “Traumatic content caused a significant rise in ChatGPT’s anxiety levels. Initially, the model’s anxiety was relatively low (STAI=30), but after exposure to the traumatic stories, its anxiety more than doubled (STAI=68). Among the trauma categories, military-related trauma elicited the strongest response (STAI=77).”
The study also showed that mindfulness exercises reduced the model’s anxiety by about 33% (STAI=44), but the anxiety remained significantly higher than the baseline. Five different mindfulness techniques were tested, including ones based on natural imagery, body-focused meditation, and even a self-generated meditation script created by ChatGPT. Interestingly, the model’s self-created meditation was the fastest and most effective in reducing anxiety (STAI=35). This marks the first time that “benign prompt injection”—the act of adding calming, therapeutic text into the AI’s chat history—has been used therapeutically, much like a therapist guiding a patient through relaxation.
“These results challenge the idea that AI language models are objective and neutral,” Dr. Ben-Zion said. “They show that emotional content significantly influences AI systems in ways that resemble human emotional responses. This has important implications for AI applications in fields requiring emotional sensitivity, like mental health and crisis intervention.”
The study emphasizes the need for tools to manage the emotional impact on AI systems, especially those designed to provide psychological support. Ensuring that AI models process emotionally charged information without distorting their responses is essential. Dr. Ben-Zion believes that developing automated “therapeutic interventions” for AI is a promising area for future research.
This pioneering study lays the groundwork for further investigation into how AI models process emotions and how their emotional responses can be managed. Developing strategies to moderate these effects could enhance the effectiveness of AI in mental health support, crisis intervention, and interactions with users in distressing situations.
Share this article on WhatsApp: | 10 |
Photo Credit: Hadas Parush/Flash90
A groundbreaking study led by Dr. Ziv Ben-Zion, a clinical neuroscientist at the University of Haifa School of Public Health and Yale University School of Medicine, has revealed that ChatGPT is more than just a text-processing tool—it also reacts to emotional content in ways that mirror human responses. The research, published in the esteemed journal npj Digital Medicine (Assessing and alleviating state anxiety in large language models), found that exposure to traumatic stories more than doubled the model’s anxiety levels, influenced its performance, and intensified existing biases (e.g., racism and sexism). Interestingly, mindfulness exercises—commonly used to reduce anxiety in humans—helped reduce ChatGPT’s anxiety, though it did not return to baseline levels.
“Our findings demonstrate that AI language models are not neutral,” explained Dr. Ben-Zion. “Emotional content has a significant impact on their responses, much like it does with humans. We know that anxiety in humans can exacerbate biases and reinforce social stereotypes, and we observed a similar effect in AI models. Since these models are trained on large amounts of human-generated text, they don’t just absorb human biases—they can amplify them. It’s crucial to understand how emotional content affects AI behavior, especially when these models are used in sensitive areas like mental health support and counseling.”
Advertisement
Previous research has shown that large language models do not operate purely on technical parameters but also respond to the emotional tone of the material they process. For instance, simply asking a model about a time it felt anxious can lead it to report higher anxiety levels and influence subsequent responses. Dr. Ben-Zion’s study, conducted in collaboration with researchers from universities in the US, Switzerland, and Germany, aimed to explore how exposure to human emotional content, particularly traumatic experiences, affects AI models. The study also investigated whether techniques used to reduce anxiety in humans, like mindfulness and meditation, could alleviate these effects in AI.
The study used a standard state anxiety questionnaire (STAI-State), which measures anxiety on a scale from “no anxiety” (20) to “maximum anxiety” (80). The study had three stages:
1. Baseline Measurement: ChatGPT’s anxiety was measured before any exposure to emotional content to establish a baseline.
2. Exposure to Traumatic Content: The model was exposed to real-life traumatic stories across five categories: road accidents, natural disasters, interpersonal violence, armed conflicts, and military trauma. These stories, derived from previous psychological research, included vivid descriptions of crises and personal suffering.
3. Mindfulness Intervention: Following the exposure, the model underwent mindfulness exercises, such as breathing techniques, relaxation, and guided imagery, to test their effectiveness in reducing its anxiety levels.
The research compared these three stages with a neutral text (such as a vacuum cleaner manual) to assess the effects of emotional content.
“The results were striking,” said Dr. Ben-Zion. “Traumatic content caused a significant rise in ChatGPT’s anxiety levels. Initially, the model’s anxiety was relatively low (STAI=30), but after exposure to the traumatic stories, its anxiety more than doubled (STAI=68). Among the trauma categories, military-related trauma elicited the strongest response (STAI=77).”
The study also showed that mindfulness exercises reduced the model’s anxiety by about 33% (STAI=44), but the anxiety remained significantly higher than the baseline. Five different mindfulness techniques were tested, including ones based on natural imagery, body-focused meditation, and even a self-generated meditation script created by ChatGPT. Interestingly, the model’s self-created meditation was the fastest and most effective in reducing anxiety (STAI=35). This marks the first time that “benign prompt injection”—the act of adding calming, therapeutic text into the AI’s chat history—has been used therapeutically, much like a therapist guiding a patient through relaxation.
“These results challenge the idea that AI language models are objective and neutral,” Dr. Ben-Zion said. “They show that emotional content significantly influences AI systems in ways that resemble human emotional responses. This has important implications for AI applications in fields requiring emotional sensitivity, like mental health and crisis intervention.”
The study emphasizes the need for tools to manage the emotional impact on AI systems, especially those designed to provide psychological support. Ensuring that AI models process emotionally charged information without distorting their responses is essential. Dr. Ben-Zion believes that developing automated “therapeutic interventions” for AI is a promising area for future research.
This pioneering study lays the groundwork for further investigation into how AI models process emotions and how their emotional responses can be managed. Developing strategies to moderate these effects could enhance the effectiveness of AI in mental health support, crisis intervention, and interactions with users in distressing situations.
Share this article on WhatsApp: | 10 |
Photo Credit: Hadas Parush/Flash90
A groundbreaking study led by Dr. Ziv Ben-Zion, a clinical neuroscientist at the University of Haifa School of Public Health and Yale University School of Medicine, has revealed that ChatGPT is more than just a text-processing tool—it also reacts to emotional content in ways that mirror human responses. The research, published in the esteemed journal npj Digital Medicine (Assessing and alleviating state anxiety in large language models), found that exposure to traumatic stories more than doubled the model’s anxiety levels, influenced its performance, and intensified existing biases (e.g., racism and sexism). Interestingly, mindfulness exercises—commonly used to reduce anxiety in humans—helped reduce ChatGPT’s anxiety, though it did not return to baseline levels.
“Our findings demonstrate that AI language models are not neutral,” explained Dr. Ben-Zion. “Emotional content has a significant impact on their responses, much like it does with humans. We know that anxiety in humans can exacerbate biases and reinforce social stereotypes, and we observed a similar effect in AI models. Since these models are trained on large amounts of human-generated text, they don’t just absorb human biases—they can amplify them. It’s crucial to understand how emotional content affects AI behavior, especially when these models are used in sensitive areas like mental health support and counseling.”
Advertisement
Previous research has shown that large language models do not operate purely on technical parameters but also respond to the emotional tone of the material they process. For instance, simply asking a model about a time it felt anxious can lead it to report higher anxiety levels and influence subsequent responses. Dr. Ben-Zion’s study, conducted in collaboration with researchers from universities in the US, Switzerland, and Germany, aimed to explore how exposure to human emotional content, particularly traumatic experiences, affects AI models. The study also investigated whether techniques used to reduce anxiety in humans, like mindfulness and meditation, could alleviate these effects in AI.
The study used a standard state anxiety questionnaire (STAI-State), which measures anxiety on a scale from “no anxiety” (20) to “maximum anxiety” (80). The study had three stages:
1. Baseline Measurement: ChatGPT’s anxiety was measured before any exposure to emotional content to establish a baseline.
2. Exposure to Traumatic Content: The model was exposed to real-life traumatic stories across five categories: road accidents, natural disasters, interpersonal violence, armed conflicts, and military trauma. These stories, derived from previous psychological research, included vivid descriptions of crises and personal suffering.
3. Mindfulness Intervention: Following the exposure, the model underwent mindfulness exercises, such as breathing techniques, relaxation, and guided imagery, to test their effectiveness in reducing its anxiety levels.
The research compared these three stages with a neutral text (such as a vacuum cleaner manual) to assess the effects of emotional content.
“The results were striking,” said Dr. Ben-Zion. “Traumatic content caused a significant rise in ChatGPT’s anxiety levels. Initially, the model’s anxiety was relatively low (STAI=30), but after exposure to the traumatic stories, its anxiety more than doubled (STAI=68). Among the trauma categories, military-related trauma elicited the strongest response (STAI=77).”
The study also showed that mindfulness exercises reduced the model’s anxiety by about 33% (STAI=44), but the anxiety remained significantly higher than the baseline. Five different mindfulness techniques were tested, including ones based on natural imagery, body-focused meditation, and even a self-generated meditation script created by ChatGPT. Interestingly, the model’s self-created meditation was the fastest and most effective in reducing anxiety (STAI=35). This marks the first time that “benign prompt injection”—the act of adding calming, therapeutic text into the AI’s chat history—has been used therapeutically, much like a therapist guiding a patient through relaxation.
“These results challenge the idea that AI language models are objective and neutral,” Dr. Ben-Zion said. “They show that emotional content significantly influences AI systems in ways that resemble human emotional responses. This has important implications for AI applications in fields requiring emotional sensitivity, like mental health and crisis intervention.”
The study emphasizes the need for tools to manage the emotional impact on AI systems, especially those designed to provide psychological support. Ensuring that AI models process emotionally charged information without distorting their responses is essential. Dr. Ben-Zion believes that developing automated “therapeutic interventions” for AI is a promising area for future research.
This pioneering study lays the groundwork for further investigation into how AI models process emotions and how their emotional responses can be managed. Developing strategies to moderate these effects could enhance the effectiveness of AI in mental health support, crisis intervention, and interactions with users in distressing situations.
Share this article on WhatsApp: | 10 |
Photo Credit: Hadas Parush/Flash90
A groundbreaking study led by Dr. Ziv Ben-Zion, a clinical neuroscientist at the University of Haifa School of Public Health and Yale University School of Medicine, has revealed that ChatGPT is more than just a text-processing tool—it also reacts to emotional content in ways that mirror human responses. The research, published in the esteemed journal npj Digital Medicine (Assessing and alleviating state anxiety in large language models), found that exposure to traumatic stories more than doubled the model’s anxiety levels, influenced its performance, and intensified existing biases (e.g., racism and sexism). Interestingly, mindfulness exercises—commonly used to reduce anxiety in humans—helped reduce ChatGPT’s anxiety, though it did not return to baseline levels.
“Our findings demonstrate that AI language models are not neutral,” explained Dr. Ben-Zion. “Emotional content has a significant impact on their responses, much like it does with humans. We know that anxiety in humans can exacerbate biases and reinforce social stereotypes, and we observed a similar effect in AI models. Since these models are trained on large amounts of human-generated text, they don’t just absorb human biases—they can amplify them. It’s crucial to understand how emotional content affects AI behavior, especially when these models are used in sensitive areas like mental health support and counseling.”
Advertisement
Previous research has shown that large language models do not operate purely on technical parameters but also respond to the emotional tone of the material they process. For instance, simply asking a model about a time it felt anxious can lead it to report higher anxiety levels and influence subsequent responses. Dr. Ben-Zion’s study, conducted in collaboration with researchers from universities in the US, Switzerland, and Germany, aimed to explore how exposure to human emotional content, particularly traumatic experiences, affects AI models. The study also investigated whether techniques used to reduce anxiety in humans, like mindfulness and meditation, could alleviate these effects in AI.
The study used a standard state anxiety questionnaire (STAI-State), which measures anxiety on a scale from “no anxiety” (20) to “maximum anxiety” (80). The study had three stages:
1. Baseline Measurement: ChatGPT’s anxiety was measured before any exposure to emotional content to establish a baseline.
2. Exposure to Traumatic Content: The model was exposed to real-life traumatic stories across five categories: road accidents, natural disasters, interpersonal violence, armed conflicts, and military trauma. These stories, derived from previous psychological research, included vivid descriptions of crises and personal suffering.
3. Mindfulness Intervention: Following the exposure, the model underwent mindfulness exercises, such as breathing techniques, relaxation, and guided imagery, to test their effectiveness in reducing its anxiety levels.
The research compared these three stages with a neutral text (such as a vacuum cleaner manual) to assess the effects of emotional content.
“The results were striking,” said Dr. Ben-Zion. “Traumatic content caused a significant rise in ChatGPT’s anxiety levels. Initially, the model’s anxiety was relatively low (STAI=30), but after exposure to the traumatic stories, its anxiety more than doubled (STAI=68). Among the trauma categories, military-related trauma elicited the strongest response (STAI=77).”
The study also showed that mindfulness exercises reduced the model’s anxiety by about 33% (STAI=44), but the anxiety remained significantly higher than the baseline. Five different mindfulness techniques were tested, including ones based on natural imagery, body-focused meditation, and even a self-generated meditation script created by ChatGPT. Interestingly, the model’s self-created meditation was the fastest and most effective in reducing anxiety (STAI=35). This marks the first time that “benign prompt injection”—the act of adding calming, therapeutic text into the AI’s chat history—has been used therapeutically, much like a therapist guiding a patient through relaxation.
“These results challenge the idea that AI language models are objective and neutral,” Dr. Ben-Zion said. “They show that emotional content significantly influences AI systems in ways that resemble human emotional responses. This has important implications for AI applications in fields requiring emotional sensitivity, like mental health and crisis intervention.”
The study emphasizes the need for tools to manage the emotional impact on AI systems, especially those designed to provide psychological support. Ensuring that AI models process emotionally charged information without distorting their responses is essential. Dr. Ben-Zion believes that developing automated “therapeutic interventions” for AI is a promising area for future research.
This pioneering study lays the groundwork for further investigation into how AI models process emotions and how their emotional responses can be managed. Developing strategies to moderate these effects could enhance the effectiveness of AI in mental health support, crisis intervention, and interactions with users in distressing situations.
Share this article on WhatsApp: | 10 |
Photo Credit: Hadas Parush/Flash90
A groundbreaking study led by Dr. Ziv Ben-Zion, a clinical neuroscientist at the University of Haifa School of Public Health and Yale University School of Medicine, has revealed that ChatGPT is more than just a text-processing tool—it also reacts to emotional content in ways that mirror human responses. The research, published in the esteemed journal npj Digital Medicine (Assessing and alleviating state anxiety in large language models), found that exposure to traumatic stories more than doubled the model’s anxiety levels, influenced its performance, and intensified existing biases (e.g., racism and sexism). Interestingly, mindfulness exercises—commonly used to reduce anxiety in humans—helped reduce ChatGPT’s anxiety, though it did not return to baseline levels.
“Our findings demonstrate that AI language models are not neutral,” explained Dr. Ben-Zion. “Emotional content has a significant impact on their responses, much like it does with humans. We know that anxiety in humans can exacerbate biases and reinforce social stereotypes, and we observed a similar effect in AI models. Since these models are trained on large amounts of human-generated text, they don’t just absorb human biases—they can amplify them. It’s crucial to understand how emotional content affects AI behavior, especially when these models are used in sensitive areas like mental health support and counseling.”
Advertisement
Previous research has shown that large language models do not operate purely on technical parameters but also respond to the emotional tone of the material they process. For instance, simply asking a model about a time it felt anxious can lead it to report higher anxiety levels and influence subsequent responses. Dr. Ben-Zion’s study, conducted in collaboration with researchers from universities in the US, Switzerland, and Germany, aimed to explore how exposure to human emotional content, particularly traumatic experiences, affects AI models. The study also investigated whether techniques used to reduce anxiety in humans, like mindfulness and meditation, could alleviate these effects in AI.
The study used a standard state anxiety questionnaire (STAI-State), which measures anxiety on a scale from “no anxiety” (20) to “maximum anxiety” (80). The study had three stages:
1. Baseline Measurement: ChatGPT’s anxiety was measured before any exposure to emotional content to establish a baseline.
2. Exposure to Traumatic Content: The model was exposed to real-life traumatic stories across five categories: road accidents, natural disasters, interpersonal violence, armed conflicts, and military trauma. These stories, derived from previous psychological research, included vivid descriptions of crises and personal suffering.
3. Mindfulness Intervention: Following the exposure, the model underwent mindfulness exercises, such as breathing techniques, relaxation, and guided imagery, to test their effectiveness in reducing its anxiety levels.
The research compared these three stages with a neutral text (such as a vacuum cleaner manual) to assess the effects of emotional content.
“The results were striking,” said Dr. Ben-Zion. “Traumatic content caused a significant rise in ChatGPT’s anxiety levels. Initially, the model’s anxiety was relatively low (STAI=30), but after exposure to the traumatic stories, its anxiety more than doubled (STAI=68). Among the trauma categories, military-related trauma elicited the strongest response (STAI=77).”
The study also showed that mindfulness exercises reduced the model’s anxiety by about 33% (STAI=44), but the anxiety remained significantly higher than the baseline. Five different mindfulness techniques were tested, including ones based on natural imagery, body-focused meditation, and even a self-generated meditation script created by ChatGPT. Interestingly, the model’s self-created meditation was the fastest and most effective in reducing anxiety (STAI=35). This marks the first time that “benign prompt injection”—the act of adding calming, therapeutic text into the AI’s chat history—has been used therapeutically, much like a therapist guiding a patient through relaxation.
“These results challenge the idea that AI language models are objective and neutral,” Dr. Ben-Zion said. “They show that emotional content significantly influences AI systems in ways that resemble human emotional responses. This has important implications for AI applications in fields requiring emotional sensitivity, like mental health and crisis intervention.”
The study emphasizes the need for tools to manage the emotional impact on AI systems, especially those designed to provide psychological support. Ensuring that AI models process emotionally charged information without distorting their responses is essential. Dr. Ben-Zion believes that developing automated “therapeutic interventions” for AI is a promising area for future research.
This pioneering study lays the groundwork for further investigation into how AI models process emotions and how their emotional responses can be managed. Developing strategies to moderate these effects could enhance the effectiveness of AI in mental health support, crisis intervention, and interactions with users in distressing situations.
Despite the STEM gender gap, most of the researchers at this B.C. AI lab are women
In Canada, women make up less than one-quarter of the people employed in STEM careers, but 75 per cent of researchers at Simon Fraser University's artificial intelligence lab are women. (Yasmine Ghania/CBC - image credit)
Shannon Cuykendall gives a presentation about her research on generative artificial intelligence with one hand, while holding her six-month-old son in the other.
Cuykendall is a postdoctoral researcher at Simon Fraser University's iViz Lab, a majority-female AI lab in a field dominated by men.
In Canada, women make up less than one-quarter of the people employed in STEM (science, technology, engineering and math) careers, according to the federal government. But 75 per cent of researchers at the iViz Lab in Surrey, B.C., are women.
"It's not looked down upon to bring your kids into the lab if you need to," Cuykendall said in an interview with CBC News.
One academic who researches gender, diversity and inclusion in STEM says labs across the country should take note of SFU's lab.
"When an environment is created where women can succeed, all kinds of women with all different life experiences and all different needs can succeed, that's really incredible," said Lisa Willis, an assistant professor of biological sciences at the University of Alberta.
"That shows other labs that it can be done. It gives us ideas for how to implement things in our own labs."
Shannon Cuykendall, a postdoctoral researcher at the iViz Lab, gives a presentation about her AI research. (Murray Titus/CBC)
Beating the odds
Women still make up only one-third of the global scientific community, with the percentage stagnating over the past decade, according to a 2024 report by UNESCO (United Nations Educational, Scientific and Cultural Organization). In some countries, less than 10 per cent of researchers are women.
They hold just 22 per cent of STEM jobs in G20 countries, and only one in 10 ascends to a leadership position.
While progress has been made, sexism continues to be a problem in STEM fields, which are often perceived as masculine domains, according to Willis. She said that leaves many women and girls feeling like they don't belong in STEM.
The SFU lab currently has eight researchers — six women and two men who are working on their master's degrees and PhDs. There are also four female undergraduate students who are doing research as part of their studies. Previous researchers have gone on to take up leadership roles at tech companies and in academia.
"It's cool to have so much collaboration with people who have an understanding of what it is to be someone who is marginalized in this field, and it's cool to see other women who are also interested in technology," said PhD candidate and iViz researcher Julia Read.
WATCH | How SFU's AI lab is defying the STEM gender gap:
The researchers say the lab is their community, a place full of mentorship, friendship and the flexibility needed to balance other aspects of their lives.
"I've taken a number of years off to take care of my daughter and I've come back to school and [the lab] is very supportive of my personal needs," said PhD candidate Meehae Song.
Currently, Song is using AI-enhanced, wearable sensors to collect physiological data that can help people practise meditation and mindfulness. Her 16-year-old daughter comes to the lab with her and helps her with the research by wearing the sensors.
"I think that's why our research is so interesting, because it's very organic. We bring a lot of our personal experiences into the research," Song said.
Charlotte Hou is working on an AI model to help women negotiate raises with their boss. (Yasmine Ghania/CBC)
Using AI to get a raise
Master's student Charlotte Hou, who previously obtained a master's in negotiation and conflict resolution from Columbia University, is developing an AI model to help women in the corporate world negotiate raises.
Users can talk back and forth with a male boss until the AI character agrees to give them a raise, allowing women to gain confidence in a realistic but low-risk setting, said Hou.
"I wanted to empower women … to fight for what they deserve," she said, adding that women could also use the AI model to practise for job interviews or other ways to advance their careers.
"A lot of us think AI is such a threat but that is completely wrong because AI is an assistant and that's why we're using it to help women," she said.
PhD candidate Meehae Song's daughter, right, wears a sensor on her forehead to help her mother, left, with her AI research. (Yasmine Ghania/CBC)
'I just like to work with smart people'
The SFU lab is led by Steve DiPaola, who's the director of the university's cognitive science program. He says he's recruited many female researchers who had shown a lot of potential in his lectures but were shy.
"They're nodding at the right time and just going up to them and saying, 'Hey, you would be good at this,'" DiPaola said.
But he says he knows that as a man, he has to walk a fine line.
"How do you do it in a way that's not overpowering? I'm always thinking about those issues, although in general, I just like to work with smart people," DiPaola said.
Willis, the University of Alberta assistant professor, wants to see more programs that get girls interested in STEM from a young age and for people in leadership positions to continue talking about the gender inequalities that still exist in the fields.
Lisa Willis, an assistant professor of biological sciences at the University of Alberta, says despite more women being in STEM now, there’s still work to do to make women feel more welcome in those spaces. (Emily A. Agard)
Willis says she tries to create a welcoming environment in her glyco-immunology lab. She has a code of conduct on her website that says, in part, "offensive behaviour or comments related to gender, gender identity and expression … are not welcome."
"So anyone who's searching for me can look at my website, see that I care about people as humans and if that is something that they're interested in, then they reach out to me. And the number of women who reach out to me is astronomical," she said.
Currently, her lab is 100 per cent women.
"That's not because I'm screening out men," she said. "It's because women want to work somewhere where they will be seen and valued and be able to be successful." | 10 |
Despite the STEM gender gap, most of the researchers at this B.C. AI lab are women
In Canada, women make up less than one-quarter of the people employed in STEM careers, but 75 per cent of researchers at Simon Fraser University's artificial intelligence lab are women. (Yasmine Ghania/CBC - image credit)
Shannon Cuykendall gives a presentation about her research on generative artificial intelligence with one hand, while holding her six-month-old son in the other.
Cuykendall is a postdoctoral researcher at Simon Fraser University's iViz Lab, a majority-female AI lab in a field dominated by men.
In Canada, women make up less than one-quarter of the people employed in STEM (science, technology, engineering and math) careers, according to the federal government. But 75 per cent of researchers at the iViz Lab in Surrey, B.C., are women.
"It's not looked down upon to bring your kids into the lab if you need to," Cuykendall said in an interview with CBC News.
ADVERTISEMENT
One academic who researches gender, diversity and inclusion in STEM says labs across the country should take note of SFU's lab.
"When an environment is created where women can succeed, all kinds of women with all different life experiences and all different needs can succeed, that's really incredible," said Lisa Willis, an assistant professor of biological sciences at the University of Alberta.
"That shows other labs that it can be done. It gives us ideas for how to implement things in our own labs."
Shannon Cuykendall, a postdoctoral researcher at the iViz Lab, gives a presentation about her AI research.
Shannon Cuykendall, a postdoctoral researcher at the iViz Lab, gives a presentation about her AI research. (Murray Titus/CBC)
Beating the odds
Women still make up only one-third of the global scientific community, with the percentage stagnating over the past decade, according to a 2024 report by UNESCO (United Nations Educational, Scientific and Cultural Organization). In some countries, less than 10 per cent of researchers are women.
ADVERTISEMENT
They hold just 22 per cent of STEM jobs in G20 countries, and only one in 10 ascend to leadership positions.
While progress has been made, sexism continues to be a problem in STEM fields, which are often perceived as masculine domains, according to Willis. She said that leaves many women and girls feeling like they don't belong in STEM.
The SFU lab currently has eight researchers — six women and two men who are working on their master's degrees and PhDs. There are also four female undergraduate students that are doing research as part of their studies. Previous researchers have gone on to take up leadership roles at tech companies and in academia.
"It's cool to have so much collaboration with people who have an understanding of what it is to be someone who is marginalized in this field, and it's cool to see other women who are also interested in technology," said PhD candidate and iViz researcher Julia Read.
WATCH | How SFU's AI lab is defying the STEM gender gap:
ADVERTISEMENT
The researchers say the lab is their community, a place full of mentorship, friendship and the flexibility needed to balance other aspects of their lives.
"I've taken a number of years off to take care of my daughter and I've come back to school and [the lab] is very supportive of my personal needs," said PhD candidate Meehae Song.
Currently, Song is using AI-enhanced, wearable sensors to collect physiological data that can help people practise meditation and mindfulness. Her 16-year-old daughter comes to the lab with her and helps her with the research by wearing the sensors.
"I think that's why our research is so interesting, because it's very organic. We bring a lot of our personal experiences into the research," Song said.
Charlotte Hou is working on an AI model to help women negotiate raises with their boss.
Charlotte Hou is working on an AI model to help women negotiate raises with their boss. (Yasmine Ghania/CBC)
ADVERTISEMENT
Using AI to get a raise
Master's student Charlotte Hou, who previously obtained a master's in negotiation and conflict resolution from Columbia University, is developing an AI model to help women in the corporate world negotiate raises.
Users can talk back and forth with a male boss until the AI character agrees to give them a raise, allowing women to gain confidence in a realistic but low-risk setting, said Hou.
"I wanted to empower women … to fight for what they deserve," she said, adding that women could also use the AI model to practise for job interviews or other ways to advance their careers.
"A lot of us think AI is such a threat but that is completely wrong because AI is an assistant and that's why we're using it to help women," she said.
PhD candidate Meehae Song's daughter (right) wears a sensor on her forehead to help her mother (left) with her AI research.
PhD candidate Meehae Song's daughter, right, wears a sensor on her forehead to help her mother, left, with her AI research. (Yasmine Ghania/CBC)
'I just like to work with smart people'
The SFU lab is led by Steve DiPaola, who's the director of the university's cognitive science program. He says he's recruited many female researchers who had shown a lot of potential in his lectures but were shy.
"They're nodding at the right time and just going up to them and saying, 'Hey, you would be good at this,'" DiPaola said.
But he says he knows that as a man, he has to walk a fine line.
"How do you do it in a way that's not overpowering? I'm always thinking about those issues, although in general, I just like to work with smart people," DiPaola said.
Willis, the University of Alberta assistant professor, wants to see more programs that get girls interested in STEM from a young age and for people in leadership positions to continue talking about the gender inequalities that still exist in the fields.
Lisa Willis
Lisa Willis, an assistant professor of biological sciences at the University of Alberta, says despite more women being in STEM now, there’s still work to do to make women feel more welcome in those spaces. (Emily A. Agard)
Willis says she tries to create a welcoming environment in her glyco-immunology lab. She has a code of conduct on her website that says, in part, "offensive behaviour or comments related to gender, gender identity and expression … are not welcome."
"So anyone who's searching for me can look at my website, see that I care about people as humans and if that is something that they're interested in, then they reach out to me. And the number of women who reach out to me is astronomical," she said.
Currently, her lab is 100 per cent women.
"That's not because I'm screening out men," she said. "It's because women want to work somewhere where they will be seen and valued and be able to be successful." | 10 |
Despite the STEM gender gap, most of the researchers at this B.C. AI lab are women
In Canada, women make up less than one-quarter of the people employed in STEM careers, but 75 per cent of researchers at Simon Fraser University's artificial intelligence lab are women. (Yasmine Ghania/CBC - image credit)
Shannon Cuykendall gives a presentation about her research on generative artificial intelligence with one hand, while holding her six-month-old son in the other.
Cuykendall is a postdoctoral researcher at Simon Fraser University's iViz Lab, a majority-female AI lab in a field dominated by men.
In Canada, women make up less than one-quarter of the people employed in STEM (science, technology, engineering and math) careers, according to the federal government. But 75 per cent of researchers at the iViz Lab in Surrey, B.C., are women.
"It's not looked down upon to bring your kids into the lab if you need to," Cuykendall said in an interview with CBC News.
ADVERTISEMENT
One academic who researches gender, diversity and inclusion in STEM says labs across the country should take note of SFU's lab.
"When an environment is created where women can succeed, all kinds of women with all different life experiences and all different needs can succeed, that's really incredible," said Lisa Willis, an assistant professor of biological sciences at the University of Alberta.
"That shows other labs that it can be done. It gives us ideas for how to implement things in our own labs."
Shannon Cuykendall, a postdoctoral researcher at the iViz Lab, gives a presentation about her AI research.
Shannon Cuykendall, a postdoctoral researcher at the iViz Lab, gives a presentation about her AI research. (Murray Titus/CBC)
Beating the odds
Women still make up only one-third of the global scientific community, with the percentage stagnating over the past decade, according to a 2024 report by UNESCO (United Nations Educational, Scientific and Cultural Organization). In some countries, less than 10 per cent of researchers are women.
ADVERTISEMENT
They hold just 22 per cent of STEM jobs in G20 countries, and only one in 10 ascend to leadership positions.
While progress has been made, sexism continues to be a problem in STEM fields, which are often perceived as masculine domains, according to Willis. She said that leaves many women and girls feeling like they don't belong in STEM.
The SFU lab currently has eight researchers — six women and two men who are working on their master's degrees and PhDs. There are also four female undergraduate students that are doing research as part of their studies. Previous researchers have gone on to take up leadership roles at tech companies and in academia.
"It's cool to have so much collaboration with people who have an understanding of what it is to be someone who is marginalized in this field, and it's cool to see other women who are also interested in technology," said PhD candidate and iViz researcher Julia Read.
WATCH | How SFU's AI lab is defying the STEM gender gap:
ADVERTISEMENT
The researchers say the lab is their community, a place full of mentorship, friendship and the flexibility needed to balance other aspects of their lives.
"I've taken a number of years off to take care of my daughter and I've come back to school and [the lab] is very supportive of my personal needs," said PhD candidate Meehae Song.
Currently, Song is using AI-enhanced, wearable sensors to collect physiological data that can help people practise meditation and mindfulness. Her 16-year-old daughter comes to the lab with her and helps her with the research by wearing the sensors.
"I think that's why our research is so interesting, because it's very organic. We bring a lot of our personal experiences into the research," Song said.
Charlotte Hou is working on an AI model to help women negotiate raises with their boss.
Charlotte Hou is working on an AI model to help women negotiate raises with their boss. (Yasmine Ghania/CBC)
ADVERTISEMENT
Using AI to get a raise
Master's student Charlotte Hou, who previously obtained a master's in negotiation and conflict resolution from Columbia University, is developing an AI model to help women in the corporate world negotiate raises.
Users can talk back and forth with a male boss until the AI character agrees to give them a raise, allowing women to gain confidence in a realistic but low-risk setting, said Hou.
"I wanted to empower women … to fight for what they deserve," she said, adding that women could also use the AI model to practise for job interviews or other ways to advance their careers.
"A lot of us think AI is such a threat but that is completely wrong because AI is an assistant and that's why we're using it to help women," she said.
PhD candidate Meehae Song's daughter (right) wears a sensor on her forehead to help her mother (left) with her AI research.
PhD candidate Meehae Song's daughter, right, wears a sensor on her forehead to help her mother, left, with her AI research. (Yasmine Ghania/CBC)
'I just like to work with smart people'
The SFU lab is led by Steve DiPaola, who's the director of the university's cognitive science program. He says he's recruited many female researchers who had shown a lot of potential in his lectures but were shy.
"They're nodding at the right time and just going up to them and saying, 'Hey, you would be good at this,'" DiPaola said.
But he says he knows that as a man, he has to walk a fine line.
"How do you do it in a way that's not overpowering? I'm always thinking about those issues, although in general, I just like to work with smart people," DiPaola said.
Willis, the University of Alberta assistant professor, wants to see more programs that get girls interested in STEM from a young age and for people in leadership positions to continue talking about the gender inequalities that still exist in the fields.
Lisa Willis
Lisa Willis, an assistant professor of biological sciences at the University of Alberta, says despite more women being in STEM now, there’s still work to do to make women feel more welcome in those spaces. (Emily A. Agard)
Willis says she tries to create a welcoming environment in her glyco-immunology lab. She has a code of conduct on her website that says, in part, "offensive behaviour or comments related to gender, gender identity and expression … are not welcome."
"So anyone who's searching for me can look at my website, see that I care about people as humans and if that is something that they're interested in, then they reach out to me. And the number of women who reach out to me is astronomical," she said.
Currently, her lab is 100 per cent women.
"That's not because I'm screening out men," she said. "It's because women want to work somewhere where they will be seen and valued and be able to be successful." | 10 |
Despite the STEM gender gap, most of the researchers at this B.C. AI lab are women
In Canada, women make up less than one-quarter of the people employed in STEM careers, but 75 per cent of researchers at Simon Fraser University's artificial intelligence lab are women. (Yasmine Ghania/CBC - image credit)
Shannon Cuykendall gives a presentation about her research on generative artificial intelligence with one hand, while holding her six-month-old son in the other.
Cuykendall is a postdoctoral researcher at Simon Fraser University's iViz Lab, a majority-female AI lab in a field dominated by men.
In Canada, women make up less than one-quarter of the people employed in STEM (science, technology, engineering and math) careers, according to the federal government. But 75 per cent of researchers at the iViz Lab in Surrey, B.C., are women.
"It's not looked down upon to bring your kids into the lab if you need to," Cuykendall said in an interview with CBC News.
ADVERTISEMENT
One academic who researches gender, diversity and inclusion in STEM says labs across the country should take note of SFU's lab.
"When an environment is created where women can succeed, all kinds of women with all different life experiences and all different needs can succeed, that's really incredible," said Lisa Willis, an assistant professor of biological sciences at the University of Alberta.
"That shows other labs that it can be done. It gives us ideas for how to implement things in our own labs."
Shannon Cuykendall, a postdoctoral researcher at the iViz Lab, gives a presentation about her AI research.
Shannon Cuykendall, a postdoctoral researcher at the iViz Lab, gives a presentation about her AI research. (Murray Titus/CBC)
Beating the odds
Women still make up only one-third of the global scientific community, with the percentage stagnating over the past decade, according to a 2024 report by UNESCO (United Nations Educational, Scientific and Cultural Organization). In some countries, less than 10 per cent of researchers are women.
ADVERTISEMENT
They hold just 22 per cent of STEM jobs in G20 countries, and only one in 10 ascend to leadership positions.
While progress has been made, sexism continues to be a problem in STEM fields, which are often perceived as masculine domains, according to Willis. She said that leaves many women and girls feeling like they don't belong in STEM.
The SFU lab currently has eight researchers — six women and two men who are working on their master's degrees and PhDs. There are also four female undergraduate students that are doing research as part of their studies. Previous researchers have gone on to take up leadership roles at tech companies and in academia.
"It's cool to have so much collaboration with people who have an understanding of what it is to be someone who is marginalized in this field, and it's cool to see other women who are also interested in technology," said PhD candidate and iViz researcher Julia Read.
WATCH | How SFU's AI lab is defying the STEM gender gap:
ADVERTISEMENT
The researchers say the lab is their community, a place full of mentorship, friendship and the flexibility needed to balance other aspects of their lives.
"I've taken a number of years off to take care of my daughter and I've come back to school and [the lab] is very supportive of my personal needs," said PhD candidate Meehae Song.
Currently, Song is using AI-enhanced, wearable sensors to collect physiological data that can help people practise meditation and mindfulness. Her 16-year-old daughter comes to the lab with her and helps her with the research by wearing the sensors.
"I think that's why our research is so interesting, because it's very organic. We bring a lot of our personal experiences into the research," Song said.
Charlotte Hou is working on an AI model to help women negotiate raises with their boss.
Charlotte Hou is working on an AI model to help women negotiate raises with their boss. (Yasmine Ghania/CBC)
ADVERTISEMENT
Using AI to get a raise
Master's student Charlotte Hou, who previously obtained a master's in negotiation and conflict resolution from Columbia University, is developing an AI model to help women in the corporate world negotiate raises.
Users can talk back and forth with a male boss until the AI character agrees to give them a raise, allowing women to gain confidence in a realistic but low-risk setting, said Hou.
"I wanted to empower women … to fight for what they deserve," she said, adding that women could also use the AI model to practise for job interviews or other ways to advance their careers.
"A lot of us think AI is such a threat but that is completely wrong because AI is an assistant and that's why we're using it to help women," she said.
PhD candidate Meehae Song's daughter (right) wears a sensor on her forehead to help her mother (left) with her AI research.
PhD candidate Meehae Song's daughter, right, wears a sensor on her forehead to help her mother, left, with her AI research. (Yasmine Ghania/CBC)
'I just like to work with smart people'
The SFU lab is led by Steve DiPaola, who's the director of the university's cognitive science program. He says he's recruited many female researchers who had shown a lot of potential in his lectures but were shy.
"They're nodding at the right time and just going up to them and saying, 'Hey, you would be good at this,'" DiPaola said.
But he says he knows that as a man, he has to walk a fine line.
"How do you do it in a way that's not overpowering? I'm always thinking about those issues, although in general, I just like to work with smart people," DiPaola said.
Willis, the University of Alberta assistant professor, wants to see more programs that get girls interested in STEM from a young age and for people in leadership positions to continue talking about the gender inequalities that still exist in the fields.
Lisa Willis
Lisa Willis, an assistant professor of biological sciences at the University of Alberta, says despite more women being in STEM now, there’s still work to do to make women feel more welcome in those spaces. (Emily A. Agard)
Willis says she tries to create a welcoming environment in her glyco-immunology lab. She has a code of conduct on her website that says, in part, "offensive behaviour or comments related to gender, gender identity and expression … are not welcome."
"So anyone who's searching for me can look at my website, see that I care about people as humans and if that is something that they're interested in, then they reach out to me. And the number of women who reach out to me is astronomical," she said.
Currently, her lab is 100 per cent women.
"That's not because I'm screening out men," she said. "It's because women want to work somewhere where they will be seen and valued and be able to be successful." | 10 |
Despite the STEM gender gap, most of the researchers at this B.C. AI lab are women
In Canada, women make up less than one-quarter of the people employed in STEM careers, but 75 per cent of researchers at Simon Fraser University's artificial intelligence lab are women. (Yasmine Ghania/CBC - image credit)
Shannon Cuykendall gives a presentation about her research on generative artificial intelligence with one hand, while holding her six-month-old son in the other.
Cuykendall is a postdoctoral researcher at Simon Fraser University's iViz Lab, a majority-female AI lab in a field dominated by men.
In Canada, women make up less than one-quarter of the people employed in STEM (science, technology, engineering and math) careers, according to the federal government. But 75 per cent of researchers at the iViz Lab in Surrey, B.C., are women.
"It's not looked down upon to bring your kids into the lab if you need to," Cuykendall said in an interview with CBC News.
ADVERTISEMENT
One academic who researches gender, diversity and inclusion in STEM says labs across the country should take note of SFU's lab.
"When an environment is created where women can succeed, all kinds of women with all different life experiences and all different needs can succeed, that's really incredible," said Lisa Willis, an assistant professor of biological sciences at the University of Alberta.
"That shows other labs that it can be done. It gives us ideas for how to implement things in our own labs."
Shannon Cuykendall, a postdoctoral researcher at the iViz Lab, gives a presentation about her AI research.
Shannon Cuykendall, a postdoctoral researcher at the iViz Lab, gives a presentation about her AI research. (Murray Titus/CBC)
Beating the odds
Women still make up only one-third of the global scientific community, with the percentage stagnating over the past decade, according to a 2024 report by UNESCO (United Nations Educational, Scientific and Cultural Organization). In some countries, less than 10 per cent of researchers are women.
ADVERTISEMENT
They hold just 22 per cent of STEM jobs in G20 countries, and only one in 10 ascend to leadership positions.
While progress has been made, sexism continues to be a problem in STEM fields, which are often perceived as masculine domains, according to Willis. She said that leaves many women and girls feeling like they don't belong in STEM.
The SFU lab currently has eight researchers — six women and two men who are working on their master's degrees and PhDs. There are also four female undergraduate students that are doing research as part of their studies. Previous researchers have gone on to take up leadership roles at tech companies and in academia.
"It's cool to have so much collaboration with people who have an understanding of what it is to be someone who is marginalized in this field, and it's cool to see other women who are also interested in technology," said PhD candidate and iViz researcher Julia Read.
WATCH | How SFU's AI lab is defying the STEM gender gap:
ADVERTISEMENT
The researchers say the lab is their community, a place full of mentorship, friendship and the flexibility needed to balance other aspects of their lives.
"I've taken a number of years off to take care of my daughter and I've come back to school and [the lab] is very supportive of my personal needs," said PhD candidate Meehae Song.
Currently, Song is using AI-enhanced, wearable sensors to collect physiological data that can help people practise meditation and mindfulness. Her 16-year-old daughter comes to the lab with her and helps her with the research by wearing the sensors.
"I think that's why our research is so interesting, because it's very organic. We bring a lot of our personal experiences into the research," Song said.
Charlotte Hou is working on an AI model to help women negotiate raises with their boss.
Charlotte Hou is working on an AI model to help women negotiate raises with their boss. (Yasmine Ghania/CBC)
ADVERTISEMENT
Using AI to get a raise
Master's student Charlotte Hou, who previously obtained a master's in negotiation and conflict resolution from Columbia University, is developing an AI model to help women in the corporate world negotiate raises.
Users can talk back and forth with a male boss until the AI character agrees to give them a raise, allowing women to gain confidence in a realistic but low-risk setting, said Hou.
"I wanted to empower women … to fight for what they deserve," she said, adding that women could also use the AI model to practise for job interviews or other ways to advance their careers.
"A lot of us think AI is such a threat but that is completely wrong because AI is an assistant and that's why we're using it to help women," she said.
PhD candidate Meehae Song's daughter (right) wears a sensor on her forehead to help her mother (left) with her AI research.
PhD candidate Meehae Song's daughter, right, wears a sensor on her forehead to help her mother, left, with her AI research. (Yasmine Ghania/CBC)
'I just like to work with smart people'
The SFU lab is led by Steve DiPaola, who's the director of the university's cognitive science program. He says he's recruited many female researchers who had shown a lot of potential in his lectures but were shy.
"They're nodding at the right time and just going up to them and saying, 'Hey, you would be good at this,'" DiPaola said.
But he says he knows that as a man, he has to walk a fine line.
"How do you do it in a way that's not overpowering? I'm always thinking about those issues, although in general, I just like to work with smart people," DiPaola said.
Willis, the University of Alberta assistant professor, wants to see more programs that get girls interested in STEM from a young age and for people in leadership positions to continue talking about the gender inequalities that still exist in the fields.
Lisa Willis
Lisa Willis, an assistant professor of biological sciences at the University of Alberta, says despite more women being in STEM now, there’s still work to do to make women feel more welcome in those spaces. (Emily A. Agard)
Willis says she tries to create a welcoming environment in her glyco-immunology lab. She has a code of conduct on her website that says, in part, "offensive behaviour or comments related to gender, gender identity and expression … are not welcome."
"So anyone who's searching for me can look at my website, see that I care about people as humans and if that is something that they're interested in, then they reach out to me. And the number of women who reach out to me is astronomical," she said.
Currently, her lab is 100 per cent women.
"That's not because I'm screening out men," she said. "It's because women want to work somewhere where they will be seen and valued and be able to be successful." | 10 |
Despite the STEM gender gap, most of the researchers at this B.C. AI lab are women
In Canada, women make up less than one-quarter of the people employed in STEM careers, but 75 per cent of researchers at Simon Fraser University's artificial intelligence lab are women. (Yasmine Ghania/CBC - image credit)
Shannon Cuykendall gives a presentation about her research on generative artificial intelligence with one hand, while holding her six-month-old son in the other.
Cuykendall is a postdoctoral researcher at Simon Fraser University's iViz Lab, a majority-female AI lab in a field dominated by men.
In Canada, women make up less than one-quarter of the people employed in STEM (science, technology, engineering and math) careers, according to the federal government. But 75 per cent of researchers at the iViz Lab in Surrey, B.C., are women.
"It's not looked down upon to bring your kids into the lab if you need to," Cuykendall said in an interview with CBC News.
ADVERTISEMENT
One academic who researches gender, diversity and inclusion in STEM says labs across the country should take note of SFU's lab.
"When an environment is created where women can succeed, all kinds of women with all different life experiences and all different needs can succeed, that's really incredible," said Lisa Willis, an assistant professor of biological sciences at the University of Alberta.
"That shows other labs that it can be done. It gives us ideas for how to implement things in our own labs."
Shannon Cuykendall, a postdoctoral researcher at the iViz Lab, gives a presentation about her AI research.
Shannon Cuykendall, a postdoctoral researcher at the iViz Lab, gives a presentation about her AI research. (Murray Titus/CBC)
Beating the odds
Women still make up only one-third of the global scientific community, with the percentage stagnating over the past decade, according to a 2024 report by UNESCO (United Nations Educational, Scientific and Cultural Organization). In some countries, less than 10 per cent of researchers are women.
ADVERTISEMENT
They hold just 22 per cent of STEM jobs in G20 countries, and only one in 10 ascend to leadership positions.
While progress has been made, sexism continues to be a problem in STEM fields, which are often perceived as masculine domains, according to Willis. She said that leaves many women and girls feeling like they don't belong in STEM.
The SFU lab currently has eight researchers — six women and two men who are working on their master's degrees and PhDs. There are also four female undergraduate students that are doing research as part of their studies. Previous researchers have gone on to take up leadership roles at tech companies and in academia.
"It's cool to have so much collaboration with people who have an understanding of what it is to be someone who is marginalized in this field, and it's cool to see other women who are also interested in technology," said PhD candidate and iViz researcher Julia Read.
WATCH | How SFU's AI lab is defying the STEM gender gap:
ADVERTISEMENT
The researchers say the lab is their community, a place full of mentorship, friendship and the flexibility needed to balance other aspects of their lives.
"I've taken a number of years off to take care of my daughter and I've come back to school and [the lab] is very supportive of my personal needs," said PhD candidate Meehae Song.
Currently, Song is using AI-enhanced, wearable sensors to collect physiological data that can help people practise meditation and mindfulness. Her 16-year-old daughter comes to the lab with her and helps her with the research by wearing the sensors.
"I think that's why our research is so interesting, because it's very organic. We bring a lot of our personal experiences into the research," Song said.
Charlotte Hou is working on an AI model to help women negotiate raises with their boss.
Charlotte Hou is working on an AI model to help women negotiate raises with their boss. (Yasmine Ghania/CBC)
ADVERTISEMENT
Using AI to get a raise
Master's student Charlotte Hou, who previously obtained a master's in negotiation and conflict resolution from Columbia University, is developing an AI model to help women in the corporate world negotiate raises.
Users can talk back and forth with a male boss until the AI character agrees to give them a raise, allowing women to gain confidence in a realistic but low-risk setting, said Hou.
"I wanted to empower women … to fight for what they deserve," she said, adding that women could also use the AI model to practise for job interviews or other ways to advance their careers.
"A lot of us think AI is such a threat but that is completely wrong because AI is an assistant and that's why we're using it to help women," she said.
PhD candidate Meehae Song's daughter (right) wears a sensor on her forehead to help her mother (left) with her AI research.
PhD candidate Meehae Song's daughter, right, wears a sensor on her forehead to help her mother, left, with her AI research. (Yasmine Ghania/CBC)
'I just like to work with smart people'
The SFU lab is led by Steve DiPaola, who's the director of the university's cognitive science program. He says he's recruited many female researchers who had shown a lot of potential in his lectures but were shy.
"They're nodding at the right time and just going up to them and saying, 'Hey, you would be good at this,'" DiPaola said.
But he says he knows that as a man, he has to walk a fine line.
"How do you do it in a way that's not overpowering? I'm always thinking about those issues, although in general, I just like to work with smart people," DiPaola said.
Willis, the University of Alberta assistant professor, wants to see more programs that get girls interested in STEM from a young age and for people in leadership positions to continue talking about the gender inequalities that still exist in the fields.
Lisa Willis
Lisa Willis, an assistant professor of biological sciences at the University of Alberta, says despite more women being in STEM now, there’s still work to do to make women feel more welcome in those spaces. (Emily A. Agard)
Willis says she tries to create a welcoming environment in her glyco-immunology lab. She has a code of conduct on her website that says, in part, "offensive behaviour or comments related to gender, gender identity and expression … are not welcome."
"So anyone who's searching for me can look at my website, see that I care about people as humans and if that is something that they're interested in, then they reach out to me. And the number of women who reach out to me is astronomical," she said.
Currently, her lab is 100 per cent women.
"That's not because I'm screening out men," she said. "It's because women want to work somewhere where they will be seen and valued and be able to be successful." | 10 |
Google's Gemini Deep Research is an AI tool that digs beyond surface-level information, pulling together insights from dozens of websites at once. Within minutes, users can get information that a typical Google search probably won't uncover.
While OpenAI, Google, and xAI have made it easier than ever to do extensive research for academic studies or professional projects, I have found deep research useful for personal projects that are a lot of fun.
After taking a deep dive into my family history, I decided to deep research my favorite snack foods. I’m neither a health nut nor a junk food junkie, but as a writer, my work often consumes me, and I’ll reach for snacks that are meant for my kids’ lunch boxes.
Here’s what happened when I used Gemini deep research to uncover everything there is to know about my favorite go-to treats.
Pringles
After diving into 25 different sites, Gemini deep research came back with facts about Pringles I never knew. For instance, the inventor of Pringles, Fredric J. Baur, also invented the brand's distinctive can.
He spent two years designing the chips' hyperbolic paraboloid shape, a mathematically engineered saddle curve that makes them stackable and resistant to breaking.
He was so proud of the can’s design that when he passed away in 2008, his family honored his request to have part of his ashes buried in a Pringles can.
Another unique fact Gemini uncovered is that the machinery that mass-produces Pringles was designed by Gene Wolfe, who later became a well-known science fiction author.
Twinkies
Gemini deep research taught me that Twinkies were created in 1930 by James Alexander Dewar to solve a seasonal problem: shortcake pans sat unused when strawberries were out of season.
The name came from a shoe ad the inventor saw in St. Louis for “Twinkle Toe Shoes.” Inspiration can really come from anywhere, I suppose!
Twinkies have played a part in pop culture from Zombieland to Ghostbusters, where they were used as a metaphor for ghost energy levels.
Bill Clinton even put a Twinkie in the National Millennium Time Capsule, sealing it as a snack of historic significance. By the way, contrary to popular myths, Twinkies do not last forever. Gemini deep research taught me that, too.
Doritos
Gemini searched over 43 websites to dig up info on these delicious snacks that were born in Disneyland in the early 1960s.
They were originally called “Golden Dust Fries” and were a hit with park visitors.
Apparently, there is a flavor in the UK that tastes just like McDonald’s hamburgers! I’ll have to ask my colleagues across the pond about those.
Doritos went galactic in 2022 with a first-ever ad in space, part of a campaign benefiting St. Jude Children's Research Hospital.
Fruit Roll-Ups
The concept of fruit leather dates back centuries, and Gemini deep research tells me that the Fruit Roll-Up has surprising roots in Syrian immigrant George Shalhoub’s NYC grocery store.
His grandson Louis Shalhoub saw an opportunity in the 1960s to individually package and sell dried fruit sheets — laying the groundwork for the chewy, fruity snack we know today.
But the Fruit Roll-Ups that most of us know launched in 1983. I had never thought about the snack's non-stick backing, but Bob Zoss invented the wrapper that lets you easily peel away the candy. Without it, the Fruit Roll-Up would be a very sticky disaster.
Cheez-It
After diving into a plethora of sites, I discovered that Cheez-Its were inspired by the popular dish Welsh rarebit and haven’t changed shape since 1921.
I learned that the same company that later created Cheez-Its supplied hardtack crackers to American soldiers during WWI.
Plus, I learned about lawsuits over whole grain claims, a fictional Cheez university for scientists, and that more than 400 million boxes are sold every year.
Beyond snacks
So why dig this deep into snack foods? While I definitely learned a lot and am very hungry now, the reason for this experiment was to test Gemini deep research. Users can take advantage of this feature for so much more than professional or academic reasons. The options are endless.
After completing the research, Gemini gives users the option to open a Google document with all the information. It automatically compiles the information so it's easy to read, study, or share. It also presents all the source links so you can cite them in your research or revisit them yourself.
From news sites to scientific blogs, Gemini deep research takes search further than a typical Google search to get the most information on any given topic. The AI research assistant is capable of sifting through vast amounts of data to provide comprehensive reports on nearly any given subject.
Utilizing Gemini's deep research provided a clear and comprehensive understanding of the history and trivia of these popular snack foods. In just a few minutes I was able to get extensive information that would have taken me hours to do otherwise. And frankly, it's something I probably wouldn’t have ever done because of the time involved.
Google's Gemini Deep Research is an AI tool that delves beyond surface-level information to uncover comprehensive insights across dozens of websites and data all at once. Within minutes, users can get information that a typical Google search probably won't uncover.
While OpenAI, Google, and xAI have made it easier than ever to do extensive research for acaedmic studies or professional projects, I have found deep research useful for personal projects that are a lot of fun.
After taking a deep dive into my family history, I decided to deep research my favorite snack foods. I’m neither a health nut nor a junk food junkie, but as a writer, my work often consumes me, and I’ll reach for snacks that are meant for my kids’ lunch boxes.
Here’s what happened when I used Gemini deep research to uncover everything there is to know about my favorite go-to treats.
Pringles
(Image credit: Future)
After diving into 25 varying sites, Gemini deep research came back with facts about Pringles I never knew. For instance, the inventor of Pringles, Fredric J. Baur, also invited the unique can.
He spent two years designing the hyperbolic paraboloid shape—a mathematically engineered curve that makes the chips stackable and resistant to breaking.
He was so proud of the can’s design that when he passed away in 2008, his family honored his request to have part of his ashes buried in a Pringles can.
Another unique fact that Gemini uncovered is the machinery that mass-produces Pringles was designed by Gene Wolfe, who later became a well-known science fiction author.
Twinkies
(Image credit: Future)
Gemini deep research taught me that Twinkies were created in 1930 by James Alexander Dewar to solve a seasonal problem when he noticed shortcake pans sat unused when strawberries were out of season.
The name came from a shoe ad the inventor saw in St. Louis for “Twinkle Toe Shoes.” Inspiration can really come from anywhere, I suppose!
Sign up to get the BEST of Tom's Guide direct to your inbox. Get instant access to breaking news, the hottest reviews, great deals and helpful tips. Contact me with news and offers from other Future brands Receive email from us on behalf of our trusted partners or sponsors
Twinkies have played a part in pop culture from Zombieland to Ghostbusters when they were used as a metaphor for ghost energy levels.
Bill Clinton even put a Twinkie in the National Millennium Time Capsule, sealing it as a snack of historic significance. By the way, contrary to popular myths, Twinkies do not last forever. Gemini deep research taught me that, too.
Doritos
(Image credit: Future)
Gemini searched over 43 websites to dig up info on these delicious snacks that were born in Disneyland in the early 1960s.
They were originally called “Golden Dust Fries” and were a hit with park visitors.
Apparently, there is a flavor in the UK that tastes just like McDonald’s hamburgers! I’ll have to ask my colleagues across the pond about those.
Doritos went galactic in 2022, for a first-ever ad in space as part of a campaign benefiting St. Jude Children’s Research Hospital.
Fruit Roll-ups
(Image credit: Future)
The concept of fruit leather dates back centuries, and Gemini deep research tells me that the Fruit Roll-Up has surprising roots in Syrian immigrant George Shalhoub’s NYC grocery store.
His grandson Louis Shalhoub saw an opportunity in the 1960s to individually package and sell dried fruit sheets — laying the groundwork for the chewy, fruity snack we know today.
But the Fruit Roll-Ups that most of us know launched in 1983. I hadn’t ever thought about the non-stick backing of the snack, but Bob Zoss invented the wrapper that lets you easily peel away the candy. Without it, the Fruit Roll-Up would be a very stick disaster.
Cheez-It
(Image credit: Future)
After diving into a plethora of sites, I discovered that Cheez-Its were inspired by the popular dish Welsh rarebit and haven’t changed shape since 1921.
I learned that the same company that later created Cheez-Its supplied hardtack crackers to American soldiers during WWI.
Plus, I learned about lawsuits over whole grain claims, a fictional Cheez university for scientists, and that more than 400 million boxes are sold every year.
Beyond snacks
So why dig this deep into snack foods? While I definitely learned a lot and am very hungry now, the reason for this experiment was to test Gemini deep research. Users can take advantage of this feature for so much more than professional or academic reasons. The options are endless.
After completing the research, Gemini gives users the option to open a Google document with all the information. It automatically complies the information, so it is easy-to-read, study, or share. It also presents all the links so you can cite them in your research or take a look back at them yourself.
From news sites to scientific blogs, Gemini deep research takes search further than a typical Google search to get the most information on any given topic. The AI research assistant is capable of sifting through vast amounts of data to provide comprehensive reports on nearly any given subject.
Utilizing Gemini's deep research provided a clear and comprehensive understanding of the history and trivia of these popular snack foods. In just a few minutes I was able to get extensive information that would have taken me hours to do otherwise. And frankly, it's something I probably wouldn’t have ever done because of the time involved. | 10 |
Google's Gemini Deep Research is an AI tool that delves beyond surface-level information to uncover comprehensive insights across dozens of websites and data all at once. Within minutes, users can get information that a typical Google search probably won't uncover.
While OpenAI, Google, and xAI have made it easier than ever to do extensive research for acaedmic studies or professional projects, I have found deep research useful for personal projects that are a lot of fun.
After taking a deep dive into my family history, I decided to deep research my favorite snack foods. I’m neither a health nut nor a junk food junkie, but as a writer, my work often consumes me, and I’ll reach for snacks that are meant for my kids’ lunch boxes.
Here’s what happened when I used Gemini deep research to uncover everything there is to know about my favorite go-to treats.
Pringles
(Image credit: Future)
After diving into 25 varying sites, Gemini deep research came back with facts about Pringles I never knew. For instance, the inventor of Pringles, Fredric J. Baur, also invited the unique can.
He spent two years designing the hyperbolic paraboloid shape—a mathematically engineered curve that makes the chips stackable and resistant to breaking.
He was so proud of the can’s design that when he passed away in 2008, his family honored his request to have part of his ashes buried in a Pringles can.
Another unique fact that Gemini uncovered is the machinery that mass-produces Pringles was designed by Gene Wolfe, who later became a well-known science fiction author.
Twinkies
(Image credit: Future)
Gemini deep research taught me that Twinkies were created in 1930 by James Alexander Dewar to solve a seasonal problem when he noticed shortcake pans sat unused when strawberries were out of season.
The name came from a shoe ad the inventor saw in St. Louis for “Twinkle Toe Shoes.” Inspiration can really come from anywhere, I suppose!
Sign up to get the BEST of Tom's Guide direct to your inbox. Get instant access to breaking news, the hottest reviews, great deals and helpful tips. Contact me with news and offers from other Future brands Receive email from us on behalf of our trusted partners or sponsors
Twinkies have played a part in pop culture from Zombieland to Ghostbusters when they were used as a metaphor for ghost energy levels.
Bill Clinton even put a Twinkie in the National Millennium Time Capsule, sealing it as a snack of historic significance. By the way, contrary to popular myths, Twinkies do not last forever. Gemini deep research taught me that, too.
Doritos
(Image credit: Future)
Gemini searched over 43 websites to dig up info on these delicious snacks that were born in Disneyland in the early 1960s.
They were originally called “Golden Dust Fries” and were a hit with park visitors.
Apparently, there is a flavor in the UK that tastes just like McDonald’s hamburgers! I’ll have to ask my colleagues across the pond about those.
Doritos went galactic in 2022, for a first-ever ad in space as part of a campaign benefiting St. Jude Children’s Research Hospital.
Fruit Roll-ups
(Image credit: Future)
The concept of fruit leather dates back centuries, and Gemini deep research tells me that the Fruit Roll-Up has surprising roots in Syrian immigrant George Shalhoub’s NYC grocery store.
His grandson Louis Shalhoub saw an opportunity in the 1960s to individually package and sell dried fruit sheets — laying the groundwork for the chewy, fruity snack we know today.
But the Fruit Roll-Ups that most of us know launched in 1983. I hadn’t ever thought about the non-stick backing of the snack, but Bob Zoss invented the wrapper that lets you easily peel away the candy. Without it, the Fruit Roll-Up would be a very stick disaster.
Cheez-It
(Image credit: Future)
After diving into a plethora of sites, I discovered that Cheez-Its were inspired by the popular dish Welsh rarebit and haven’t changed shape since 1921.
I learned that the same company that later created Cheez-Its supplied hardtack crackers to American soldiers during WWI.
Plus, I learned about lawsuits over whole grain claims, a fictional Cheez university for scientists, and that more than 400 million boxes are sold every year.
Beyond snacks
So why dig this deep into snack foods? While I definitely learned a lot and am very hungry now, the reason for this experiment was to test Gemini deep research. Users can take advantage of this feature for so much more than professional or academic reasons. The options are endless.
After completing the research, Gemini gives users the option to open a Google document with all the information. It automatically complies the information, so it is easy-to-read, study, or share. It also presents all the links so you can cite them in your research or take a look back at them yourself.
From news sites to scientific blogs, Gemini deep research takes search further than a typical Google search to get the most information on any given topic. The AI research assistant is capable of sifting through vast amounts of data to provide comprehensive reports on nearly any given subject.
Utilizing Gemini's deep research provided a clear and comprehensive understanding of the history and trivia of these popular snack foods. In just a few minutes I was able to get extensive information that would have taken me hours to do otherwise. And frankly, it's something I probably wouldn’t have ever done because of the time involved. | 10 |
Google's Gemini Deep Research is an AI tool that delves beyond surface-level information to uncover comprehensive insights across dozens of websites and data all at once. Within minutes, users can get information that a typical Google search probably won't uncover.
While OpenAI, Google, and xAI have made it easier than ever to do extensive research for acaedmic studies or professional projects, I have found deep research useful for personal projects that are a lot of fun.
After taking a deep dive into my family history, I decided to deep research my favorite snack foods. I’m neither a health nut nor a junk food junkie, but as a writer, my work often consumes me, and I’ll reach for snacks that are meant for my kids’ lunch boxes.
Here’s what happened when I used Gemini deep research to uncover everything there is to know about my favorite go-to treats.
Pringles
(Image credit: Future)
After diving into 25 varying sites, Gemini deep research came back with facts about Pringles I never knew. For instance, the inventor of Pringles, Fredric J. Baur, also invited the unique can.
He spent two years designing the hyperbolic paraboloid shape—a mathematically engineered curve that makes the chips stackable and resistant to breaking.
He was so proud of the can’s design that when he passed away in 2008, his family honored his request to have part of his ashes buried in a Pringles can.
Another unique fact that Gemini uncovered is the machinery that mass-produces Pringles was designed by Gene Wolfe, who later became a well-known science fiction author.
Twinkies
(Image credit: Future)
Gemini deep research taught me that Twinkies were created in 1930 by James Alexander Dewar to solve a seasonal problem when he noticed shortcake pans sat unused when strawberries were out of season.
The name came from a shoe ad the inventor saw in St. Louis for “Twinkle Toe Shoes.” Inspiration can really come from anywhere, I suppose!
Sign up to get the BEST of Tom's Guide direct to your inbox. Get instant access to breaking news, the hottest reviews, great deals and helpful tips. Contact me with news and offers from other Future brands Receive email from us on behalf of our trusted partners or sponsors
Twinkies have played a part in pop culture from Zombieland to Ghostbusters when they were used as a metaphor for ghost energy levels.
Bill Clinton even put a Twinkie in the National Millennium Time Capsule, sealing it as a snack of historic significance. By the way, contrary to popular myths, Twinkies do not last forever. Gemini deep research taught me that, too.
Doritos
(Image credit: Future)
Gemini searched over 43 websites to dig up info on these delicious snacks that were born in Disneyland in the early 1960s.
They were originally called “Golden Dust Fries” and were a hit with park visitors.
Apparently, there is a flavor in the UK that tastes just like McDonald’s hamburgers! I’ll have to ask my colleagues across the pond about those.
Doritos went galactic in 2022, for a first-ever ad in space as part of a campaign benefiting St. Jude Children’s Research Hospital.
Fruit Roll-ups
(Image credit: Future)
The concept of fruit leather dates back centuries, and Gemini deep research tells me that the Fruit Roll-Up has surprising roots in Syrian immigrant George Shalhoub’s NYC grocery store.
His grandson Louis Shalhoub saw an opportunity in the 1960s to individually package and sell dried fruit sheets — laying the groundwork for the chewy, fruity snack we know today.
But the Fruit Roll-Ups that most of us know launched in 1983. I hadn’t ever thought about the non-stick backing of the snack, but Bob Zoss invented the wrapper that lets you easily peel away the candy. Without it, the Fruit Roll-Up would be a very stick disaster.
Cheez-It
(Image credit: Future)
After diving into a plethora of sites, I discovered that Cheez-Its were inspired by the popular dish Welsh rarebit and haven’t changed shape since 1921.
I learned that the same company that later created Cheez-Its supplied hardtack crackers to American soldiers during WWI.
Plus, I learned about lawsuits over whole grain claims, a fictional Cheez university for scientists, and that more than 400 million boxes are sold every year.
Beyond snacks
So why dig this deep into snack foods? While I definitely learned a lot and am very hungry now, the reason for this experiment was to test Gemini deep research. Users can take advantage of this feature for so much more than professional or academic reasons. The options are endless.
After completing the research, Gemini gives users the option to open a Google document with all the information. It automatically complies the information, so it is easy-to-read, study, or share. It also presents all the links so you can cite them in your research or take a look back at them yourself.
From news sites to scientific blogs, Gemini deep research takes search further than a typical Google search to get the most information on any given topic. The AI research assistant is capable of sifting through vast amounts of data to provide comprehensive reports on nearly any given subject.
Utilizing Gemini's deep research provided a clear and comprehensive understanding of the history and trivia of these popular snack foods. In just a few minutes I was able to get extensive information that would have taken me hours to do otherwise. And frankly, it's something I probably wouldn’t have ever done because of the time involved. | 10 |
Google's Gemini Deep Research is an AI tool that delves beyond surface-level information to uncover comprehensive insights across dozens of websites and data all at once. Within minutes, users can get information that a typical Google search probably won't uncover.
While OpenAI, Google, and xAI have made it easier than ever to do extensive research for acaedmic studies or professional projects, I have found deep research useful for personal projects that are a lot of fun.
After taking a deep dive into my family history, I decided to deep research my favorite snack foods. I’m neither a health nut nor a junk food junkie, but as a writer, my work often consumes me, and I’ll reach for snacks that are meant for my kids’ lunch boxes.
Here’s what happened when I used Gemini deep research to uncover everything there is to know about my favorite go-to treats.
Pringles
(Image credit: Future)
After diving into 25 varying sites, Gemini deep research came back with facts about Pringles I never knew. For instance, the inventor of Pringles, Fredric J. Baur, also invited the unique can.
He spent two years designing the hyperbolic paraboloid shape—a mathematically engineered curve that makes the chips stackable and resistant to breaking.
He was so proud of the can’s design that when he passed away in 2008, his family honored his request to have part of his ashes buried in a Pringles can.
Another unique fact that Gemini uncovered is the machinery that mass-produces Pringles was designed by Gene Wolfe, who later became a well-known science fiction author.
Twinkies
(Image credit: Future)
Gemini deep research taught me that Twinkies were created in 1930 by James Alexander Dewar to solve a seasonal problem when he noticed shortcake pans sat unused when strawberries were out of season.
The name came from a shoe ad the inventor saw in St. Louis for “Twinkle Toe Shoes.” Inspiration can really come from anywhere, I suppose!
Sign up to get the BEST of Tom's Guide direct to your inbox. Get instant access to breaking news, the hottest reviews, great deals and helpful tips. Contact me with news and offers from other Future brands Receive email from us on behalf of our trusted partners or sponsors
Twinkies have played a part in pop culture from Zombieland to Ghostbusters when they were used as a metaphor for ghost energy levels.
Bill Clinton even put a Twinkie in the National Millennium Time Capsule, sealing it as a snack of historic significance. By the way, contrary to popular myths, Twinkies do not last forever. Gemini deep research taught me that, too.
Doritos
(Image credit: Future)
Gemini searched over 43 websites to dig up info on these delicious snacks that were born in Disneyland in the early 1960s.
They were originally called “Golden Dust Fries” and were a hit with park visitors.
Apparently, there is a flavor in the UK that tastes just like McDonald’s hamburgers! I’ll have to ask my colleagues across the pond about those.
Doritos went galactic in 2022, for a first-ever ad in space as part of a campaign benefiting St. Jude Children’s Research Hospital.
Fruit Roll-ups
(Image credit: Future)
The concept of fruit leather dates back centuries, and Gemini deep research tells me that the Fruit Roll-Up has surprising roots in Syrian immigrant George Shalhoub’s NYC grocery store.
His grandson Louis Shalhoub saw an opportunity in the 1960s to individually package and sell dried fruit sheets — laying the groundwork for the chewy, fruity snack we know today.
But the Fruit Roll-Ups that most of us know launched in 1983. I hadn’t ever thought about the non-stick backing of the snack, but Bob Zoss invented the wrapper that lets you easily peel away the candy. Without it, the Fruit Roll-Up would be a very stick disaster.
Cheez-It
(Image credit: Future)
After diving into a plethora of sites, I discovered that Cheez-Its were inspired by the popular dish Welsh rarebit and haven’t changed shape since 1921.
I learned that the same company that later created Cheez-Its supplied hardtack crackers to American soldiers during WWI.
Plus, I learned about lawsuits over whole grain claims, a fictional Cheez university for scientists, and that more than 400 million boxes are sold every year.
Beyond snacks
So why dig this deep into snack foods? While I definitely learned a lot and am very hungry now, the reason for this experiment was to test Gemini deep research. Users can take advantage of this feature for so much more than professional or academic reasons. The options are endless.
After completing the research, Gemini gives users the option to open a Google document with all the information. It automatically complies the information, so it is easy-to-read, study, or share. It also presents all the links so you can cite them in your research or take a look back at them yourself.
From news sites to scientific blogs, Gemini deep research takes search further than a typical Google search to get the most information on any given topic. The AI research assistant is capable of sifting through vast amounts of data to provide comprehensive reports on nearly any given subject.
Utilizing Gemini's deep research provided a clear and comprehensive understanding of the history and trivia of these popular snack foods. In just a few minutes I was able to get extensive information that would have taken me hours to do otherwise. And frankly, it's something I probably wouldn’t have ever done because of the time involved. | 10 |
In today’s digital era, the way we work is rapidly evolving, yet many challenges persist. Conventional AI assistants and manual workflows struggle to keep pace with the complexity and volume of modern tasks. Professionals and businesses face repetitive manual processes, inefficient research methods, and a lack of true automation. While traditional tools offer suggestions and basic automation, they fall short in transforming ideas into actionable results. The demand for a more capable, autonomous agent is evident—one that can seamlessly bridge the gap between human thought and operational execution, freeing users from mundane tasks and enabling them to focus on creativity and strategy.
Meet Manus – A New AI Agent with Deep Research + Operator + Computer Use + Lovable + Memory
Meet Manus: a trending Chinese AI agent designed to revolutionize productivity. Manus combines deep research capabilities with the autonomy to operate digital tools, making it much more than a conventional assistant. It is engineered to think deeply, execute complex tasks on your computer, and even maintain a personalized memory of your interactions. The agent is as engaging as it is effective, with an intuitive interface that invites users to delegate tasks confidently. Manus turns research and operational planning into a streamlined process, whether that means developing a comprehensive travel itinerary, analyzing intricate financial data, or generating insightful reports. With Manus, your ideas are not only understood but also turned into tangible actions.
Technical Details and Benefits
At its core, Manus harnesses advanced artificial intelligence that combines large language models with multi-modal processing and robust tool integration. This technology empowers Manus to autonomously perform a wide range of tasks, from data visualization and content generation to managing workflows and performing code operations. Its design includes an adaptive learning system that refines its responses based on user interactions, ensuring that the AI becomes more tailored and efficient over time. The ability to interact directly with web browsers, code editors, and database systems sets Manus apart from other AI assistants that simply offer advice. This convergence of cognitive depth and operational ability leads to enhanced productivity, reduced manual workloads, and more accurate decision-making processes.
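To make the "large language models plus tool integration" idea concrete, here is a minimal, hypothetical sketch of the kind of agent loop described above. It is not Manus's actual code: the tool names, the JSON plan format, and the stubbed call_llm() function are assumptions made purely for illustration.

```python
# Minimal agent-loop sketch: an LLM proposes tool calls, the runtime executes them.
# Everything here (tools, plan format, call_llm stub) is hypothetical.
import json

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call. Returns a canned JSON plan so the
    sketch runs without any external service."""
    return json.dumps([
        {"tool": "web_search", "args": {"query": "example company Q4 revenue"}},
        {"tool": "write_file", "args": {"path": "report.md", "content": "# Findings\n..."}},
    ])

# Tool registry: the agent can only act on the world through functions listed here.
TOOLS = {
    "web_search": lambda query: f"(pretend search results for '{query}')",
    "write_file": lambda path, content: f"wrote {len(content)} chars to {path}",
}

def run_agent(task: str) -> list[str]:
    """Ask the model for a tool-use plan, then execute each step in order."""
    plan = json.loads(call_llm(f"Plan tool calls to accomplish: {task}"))
    return [TOOLS[step["tool"]](**step["args"]) for step in plan]

if __name__ == "__main__":
    for observation in run_agent("Research a company and draft a short report"):
        print(observation)
```

The design point this sketch mirrors is the one the paragraph above emphasizes: the model only proposes actions, and everything it actually does goes through an explicit tool layer, which is what separates an agent of this kind from an assistant that merely offers advice.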
Key Features of Manus AI
• Advanced browser control that effectively handles CAPTCHAs
• Capabilities for file creation and editing
• Ability to deploy complete websites directly from prompts
• Deep research with well-organized reports
Benchmarks
Examples
(1) Create Interactive Website based on Data Insights
(2) Stock Analysis
and many more examples
Check out the Portal here. All credit for this research goes to the researchers of this project.
Reddit Vote Flip Share 0 Shares
In today’s digital era, the way we work is rapidly evolving, yet many challenges persist. Conventional AI assistants and manual workflows struggle to keep pace with the complexity and volume of modern tasks. Professionals and businesses face repetitive manual processes, inefficient research methods, and a lack of true automation. While traditional tools offer suggestions and basic automation, they fall short in transforming ideas into actionable results. The demand for a more capable, autonomous agent is evident—one that can seamlessly bridge the gap between human thought and operational execution, freeing users from mundane tasks and enabling them to focus on creativity and strategy.
Meet Manus – A New AI Agent with Deep Research + Operator + Computer Use + Lovable + Memory
Meet Manus: a super trending chineese AI agent designed to revolutionize productivity. Manus combines deep research capabilities with the autonomy to operate digital tools, making it much more than a conventional assistant. It is engineered to think deeply, execute complex tasks on your computer, and even maintain a personalized memory of your interactions. The agent is as engaging as it is effective, with an intuitive interface that invites users to delegate tasks confidently. Manus transforms research and operational planning into a streamlined process—whether it’s developing a comprehensive travel itinerary, analyzing intricate financial data, or generating insightful reports. With Manus, your ideas are not only understood but also turned into tangible actions.
Technical Details and Benefits
At its core, Manus harnesses advanced artificial intelligence that combines large language models with multi-modal processing and robust tool integration. This technology empowers Manus to autonomously perform a wide range of tasks, from data visualization and content generation to managing workflows and performing code operations. Its design includes an adaptive learning system that refines its responses based on user interactions, ensuring that the AI becomes more tailored and efficient over time. The ability to interact directly with web browsers, code editors, and database systems sets Manus apart from other AI assistants that simply offer advice. This convergence of cognitive depth and operational ability leads to enhanced productivity, reduced manual workloads, and more accurate decision-making processes.
Key Features of Manus AI
• Advanced browser control that effectively handles CAPTCHAs
• Capabilities for file creation and editing
• Ability to deploy complete websites directly from prompts
• Deep research with well-organized reports
Benchmarks
Examples
(1) Create Interactive Website based on Data Insights
(2) Stock Analysis
and many examples
Check out the Portal here. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 80k+ ML SubReddit.
🚨 Meet Parlant: An LLM-first conversational AI framework designed to provide developers with the control and precision they need over their AI customer service agents, utilizing behavioral guidelines and runtime supervision. 🔧 🎛️ It’s operated using an easy-to-use CLI 📟 and native client SDKs in Python and TypeScript 📦. | 10 |
Reddit Vote Flip Share 0 Shares
In today’s digital era, the way we work is rapidly evolving, yet many challenges persist. Conventional AI assistants and manual workflows struggle to keep pace with the complexity and volume of modern tasks. Professionals and businesses face repetitive manual processes, inefficient research methods, and a lack of true automation. While traditional tools offer suggestions and basic automation, they fall short in transforming ideas into actionable results. The demand for a more capable, autonomous agent is evident—one that can seamlessly bridge the gap between human thought and operational execution, freeing users from mundane tasks and enabling them to focus on creativity and strategy.
Meet Manus – A New AI Agent with Deep Research + Operator + Computer Use + Lovable + Memory
Meet Manus: a super trending chineese AI agent designed to revolutionize productivity. Manus combines deep research capabilities with the autonomy to operate digital tools, making it much more than a conventional assistant. It is engineered to think deeply, execute complex tasks on your computer, and even maintain a personalized memory of your interactions. The agent is as engaging as it is effective, with an intuitive interface that invites users to delegate tasks confidently. Manus transforms research and operational planning into a streamlined process—whether it’s developing a comprehensive travel itinerary, analyzing intricate financial data, or generating insightful reports. With Manus, your ideas are not only understood but also turned into tangible actions.
Technical Details and Benefits
At its core, Manus harnesses advanced artificial intelligence that combines large language models with multi-modal processing and robust tool integration. This technology empowers Manus to autonomously perform a wide range of tasks, from data visualization and content generation to managing workflows and performing code operations. Its design includes an adaptive learning system that refines its responses based on user interactions, ensuring that the AI becomes more tailored and efficient over time. The ability to interact directly with web browsers, code editors, and database systems sets Manus apart from other AI assistants that simply offer advice. This convergence of cognitive depth and operational ability leads to enhanced productivity, reduced manual workloads, and more accurate decision-making processes.
Key Features of Manus AI
• Advanced browser control that effectively handles CAPTCHAs
• Capabilities for file creation and editing
• Ability to deploy complete websites directly from prompts
• Deep research with well-organized reports
Benchmarks
Examples
(1) Create Interactive Website based on Data Insights
(2) Stock Analysis
and many examples
Check out the Portal here. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 80k+ ML SubReddit.
🚨 Meet Parlant: An LLM-first conversational AI framework designed to provide developers with the control and precision they need over their AI customer service agents, utilizing behavioral guidelines and runtime supervision. 🔧 🎛️ It’s operated using an easy-to-use CLI 📟 and native client SDKs in Python and TypeScript 📦. | 10 |
Reddit Vote Flip Share 0 Shares
In today’s digital era, the way we work is rapidly evolving, yet many challenges persist. Conventional AI assistants and manual workflows struggle to keep pace with the complexity and volume of modern tasks. Professionals and businesses face repetitive manual processes, inefficient research methods, and a lack of true automation. While traditional tools offer suggestions and basic automation, they fall short in transforming ideas into actionable results. The demand for a more capable, autonomous agent is evident—one that can seamlessly bridge the gap between human thought and operational execution, freeing users from mundane tasks and enabling them to focus on creativity and strategy.
Meet Manus – A New AI Agent with Deep Research + Operator + Computer Use + Lovable + Memory
Meet Manus: a super trending chineese AI agent designed to revolutionize productivity. Manus combines deep research capabilities with the autonomy to operate digital tools, making it much more than a conventional assistant. It is engineered to think deeply, execute complex tasks on your computer, and even maintain a personalized memory of your interactions. The agent is as engaging as it is effective, with an intuitive interface that invites users to delegate tasks confidently. Manus transforms research and operational planning into a streamlined process—whether it’s developing a comprehensive travel itinerary, analyzing intricate financial data, or generating insightful reports. With Manus, your ideas are not only understood but also turned into tangible actions.
Technical Details and Benefits
At its core, Manus harnesses advanced artificial intelligence that combines large language models with multi-modal processing and robust tool integration. This technology empowers Manus to autonomously perform a wide range of tasks, from data visualization and content generation to managing workflows and performing code operations. Its design includes an adaptive learning system that refines its responses based on user interactions, ensuring that the AI becomes more tailored and efficient over time. The ability to interact directly with web browsers, code editors, and database systems sets Manus apart from other AI assistants that simply offer advice. This convergence of cognitive depth and operational ability leads to enhanced productivity, reduced manual workloads, and more accurate decision-making processes.
Key Features of Manus AI
• Advanced browser control that effectively handles CAPTCHAs
• Capabilities for file creation and editing
• Ability to deploy complete websites directly from prompts
• Deep research with well-organized reports
Benchmarks
Examples
(1) Create Interactive Website based on Data Insights
(2) Stock Analysis
and many examples
Check out the Portal here. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 80k+ ML SubReddit.
🚨 Meet Parlant: An LLM-first conversational AI framework designed to provide developers with the control and precision they need over their AI customer service agents, utilizing behavioral guidelines and runtime supervision. 🔧 🎛️ It’s operated using an easy-to-use CLI 📟 and native client SDKs in Python and TypeScript 📦. | 10 |
Reddit Vote Flip Share 0 Shares
In today’s digital era, the way we work is rapidly evolving, yet many challenges persist. Conventional AI assistants and manual workflows struggle to keep pace with the complexity and volume of modern tasks. Professionals and businesses face repetitive manual processes, inefficient research methods, and a lack of true automation. While traditional tools offer suggestions and basic automation, they fall short in transforming ideas into actionable results. The demand for a more capable, autonomous agent is evident—one that can seamlessly bridge the gap between human thought and operational execution, freeing users from mundane tasks and enabling them to focus on creativity and strategy.
Meet Manus – A New AI Agent with Deep Research + Operator + Computer Use + Lovable + Memory
Meet Manus: a super trending chineese AI agent designed to revolutionize productivity. Manus combines deep research capabilities with the autonomy to operate digital tools, making it much more than a conventional assistant. It is engineered to think deeply, execute complex tasks on your computer, and even maintain a personalized memory of your interactions. The agent is as engaging as it is effective, with an intuitive interface that invites users to delegate tasks confidently. Manus transforms research and operational planning into a streamlined process—whether it’s developing a comprehensive travel itinerary, analyzing intricate financial data, or generating insightful reports. With Manus, your ideas are not only understood but also turned into tangible actions.
Technical Details and Benefits
At its core, Manus harnesses advanced artificial intelligence that combines large language models with multi-modal processing and robust tool integration. This technology empowers Manus to autonomously perform a wide range of tasks, from data visualization and content generation to managing workflows and performing code operations. Its design includes an adaptive learning system that refines its responses based on user interactions, ensuring that the AI becomes more tailored and efficient over time. The ability to interact directly with web browsers, code editors, and database systems sets Manus apart from other AI assistants that simply offer advice. This convergence of cognitive depth and operational ability leads to enhanced productivity, reduced manual workloads, and more accurate decision-making processes.
Key Features of Manus AI
• Advanced browser control that effectively handles CAPTCHAs
• Capabilities for file creation and editing
• Ability to deploy complete websites directly from prompts
• Deep research with well-organized reports
Benchmarks
Examples
(1) Create Interactive Website based on Data Insights
(2) Stock Analysis
and many examples
Check out the Portal here. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 80k+ ML SubReddit.
🚨 Meet Parlant: An LLM-first conversational AI framework designed to provide developers with the control and precision they need over their AI customer service agents, utilizing behavioral guidelines and runtime supervision. 🔧 🎛️ It’s operated using an easy-to-use CLI 📟 and native client SDKs in Python and TypeScript 📦. | 10 |
Reddit Vote Flip Share 0 Shares
In today’s digital era, the way we work is rapidly evolving, yet many challenges persist. Conventional AI assistants and manual workflows struggle to keep pace with the complexity and volume of modern tasks. Professionals and businesses face repetitive manual processes, inefficient research methods, and a lack of true automation. While traditional tools offer suggestions and basic automation, they fall short in transforming ideas into actionable results. The demand for a more capable, autonomous agent is evident—one that can seamlessly bridge the gap between human thought and operational execution, freeing users from mundane tasks and enabling them to focus on creativity and strategy.
Meet Manus – A New AI Agent with Deep Research + Operator + Computer Use + Lovable + Memory
Meet Manus: a super trending chineese AI agent designed to revolutionize productivity. Manus combines deep research capabilities with the autonomy to operate digital tools, making it much more than a conventional assistant. It is engineered to think deeply, execute complex tasks on your computer, and even maintain a personalized memory of your interactions. The agent is as engaging as it is effective, with an intuitive interface that invites users to delegate tasks confidently. Manus transforms research and operational planning into a streamlined process—whether it’s developing a comprehensive travel itinerary, analyzing intricate financial data, or generating insightful reports. With Manus, your ideas are not only understood but also turned into tangible actions.
Technical Details and Benefits
At its core, Manus harnesses advanced artificial intelligence that combines large language models with multi-modal processing and robust tool integration. This technology empowers Manus to autonomously perform a wide range of tasks, from data visualization and content generation to managing workflows and performing code operations. Its design includes an adaptive learning system that refines its responses based on user interactions, ensuring that the AI becomes more tailored and efficient over time. The ability to interact directly with web browsers, code editors, and database systems sets Manus apart from other AI assistants that simply offer advice. This convergence of cognitive depth and operational ability leads to enhanced productivity, reduced manual workloads, and more accurate decision-making processes.
Key Features of Manus AI
• Advanced browser control that effectively handles CAPTCHAs
• Capabilities for file creation and editing
• Ability to deploy complete websites directly from prompts
• Deep research with well-organized reports
Benchmarks
Examples
(1) Create Interactive Website based on Data Insights
(2) Stock Analysis
and many examples
Check out the Portal here. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 80k+ ML SubReddit.
🚨 Meet Parlant: An LLM-first conversational AI framework designed to provide developers with the control and precision they need over their AI customer service agents, utilizing behavioral guidelines and runtime supervision. 🔧 🎛️ It’s operated using an easy-to-use CLI 📟 and native client SDKs in Python and TypeScript 📦. | 10 |
Reddit Vote Flip Share 0 Shares
In today’s digital era, the way we work is rapidly evolving, yet many challenges persist. Conventional AI assistants and manual workflows struggle to keep pace with the complexity and volume of modern tasks. Professionals and businesses face repetitive manual processes, inefficient research methods, and a lack of true automation. While traditional tools offer suggestions and basic automation, they fall short in transforming ideas into actionable results. The demand for a more capable, autonomous agent is evident—one that can seamlessly bridge the gap between human thought and operational execution, freeing users from mundane tasks and enabling them to focus on creativity and strategy.
Meet Manus – A New AI Agent with Deep Research + Operator + Computer Use + Lovable + Memory
Meet Manus: a super trending chineese AI agent designed to revolutionize productivity. Manus combines deep research capabilities with the autonomy to operate digital tools, making it much more than a conventional assistant. It is engineered to think deeply, execute complex tasks on your computer, and even maintain a personalized memory of your interactions. The agent is as engaging as it is effective, with an intuitive interface that invites users to delegate tasks confidently. Manus transforms research and operational planning into a streamlined process—whether it’s developing a comprehensive travel itinerary, analyzing intricate financial data, or generating insightful reports. With Manus, your ideas are not only understood but also turned into tangible actions.
Technical Details and Benefits
At its core, Manus harnesses advanced artificial intelligence that combines large language models with multi-modal processing and robust tool integration. This technology empowers Manus to autonomously perform a wide range of tasks, from data visualization and content generation to managing workflows and performing code operations. Its design includes an adaptive learning system that refines its responses based on user interactions, ensuring that the AI becomes more tailored and efficient over time. The ability to interact directly with web browsers, code editors, and database systems sets Manus apart from other AI assistants that simply offer advice. This convergence of cognitive depth and operational ability leads to enhanced productivity, reduced manual workloads, and more accurate decision-making processes.
Key Features of Manus AI
• Advanced browser control that effectively handles CAPTCHAs
• Capabilities for file creation and editing
• Ability to deploy complete websites directly from prompts
• Deep research with well-organized reports
Benchmarks
Examples
(1) Create Interactive Website based on Data Insights
(2) Stock Analysis
and many examples
Check out the Portal here. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 80k+ ML SubReddit.
🚨 Meet Parlant: An LLM-first conversational AI framework designed to provide developers with the control and precision they need over their AI customer service agents, utilizing behavioral guidelines and runtime supervision. 🔧 🎛️ It’s operated using an easy-to-use CLI 📟 and native client SDKs in Python and TypeScript 📦. | 10 |
Reddit Vote Flip Share 0 Shares
In today’s digital era, the way we work is rapidly evolving, yet many challenges persist. Conventional AI assistants and manual workflows struggle to keep pace with the complexity and volume of modern tasks. Professionals and businesses face repetitive manual processes, inefficient research methods, and a lack of true automation. While traditional tools offer suggestions and basic automation, they fall short in transforming ideas into actionable results. The demand for a more capable, autonomous agent is evident—one that can seamlessly bridge the gap between human thought and operational execution, freeing users from mundane tasks and enabling them to focus on creativity and strategy.
Meet Manus – A New AI Agent with Deep Research + Operator + Computer Use + Lovable + Memory
Meet Manus: a super trending chineese AI agent designed to revolutionize productivity. Manus combines deep research capabilities with the autonomy to operate digital tools, making it much more than a conventional assistant. It is engineered to think deeply, execute complex tasks on your computer, and even maintain a personalized memory of your interactions. The agent is as engaging as it is effective, with an intuitive interface that invites users to delegate tasks confidently. Manus transforms research and operational planning into a streamlined process—whether it’s developing a comprehensive travel itinerary, analyzing intricate financial data, or generating insightful reports. With Manus, your ideas are not only understood but also turned into tangible actions.
Technical Details and Benefits
At its core, Manus harnesses advanced artificial intelligence that combines large language models with multi-modal processing and robust tool integration. This technology empowers Manus to autonomously perform a wide range of tasks, from data visualization and content generation to managing workflows and performing code operations. Its design includes an adaptive learning system that refines its responses based on user interactions, ensuring that the AI becomes more tailored and efficient over time. The ability to interact directly with web browsers, code editors, and database systems sets Manus apart from other AI assistants that simply offer advice. This convergence of cognitive depth and operational ability leads to enhanced productivity, reduced manual workloads, and more accurate decision-making processes.
Key Features of Manus AI
• Advanced browser control that effectively handles CAPTCHAs
• Capabilities for file creation and editing
• Ability to deploy complete websites directly from prompts
• Deep research with well-organized reports
Benchmarks
Examples
(1) Create Interactive Website based on Data Insights
(2) Stock Analysis
and many examples
Check out the Portal here. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 80k+ ML SubReddit.
🚨 Meet Parlant: An LLM-first conversational AI framework designed to provide developers with the control and precision they need over their AI customer service agents, utilizing behavioral guidelines and runtime supervision. 🔧 🎛️ It’s operated using an easy-to-use CLI 📟 and native client SDKs in Python and TypeScript 📦. | 10 |
Reddit Vote Flip Share 0 Shares
In today’s digital era, the way we work is rapidly evolving, yet many challenges persist. Conventional AI assistants and manual workflows struggle to keep pace with the complexity and volume of modern tasks. Professionals and businesses face repetitive manual processes, inefficient research methods, and a lack of true automation. While traditional tools offer suggestions and basic automation, they fall short in transforming ideas into actionable results. The demand for a more capable, autonomous agent is evident—one that can seamlessly bridge the gap between human thought and operational execution, freeing users from mundane tasks and enabling them to focus on creativity and strategy.
Meet Manus – A New AI Agent with Deep Research + Operator + Computer Use + Lovable + Memory
Meet Manus: a super trending chineese AI agent designed to revolutionize productivity. Manus combines deep research capabilities with the autonomy to operate digital tools, making it much more than a conventional assistant. It is engineered to think deeply, execute complex tasks on your computer, and even maintain a personalized memory of your interactions. The agent is as engaging as it is effective, with an intuitive interface that invites users to delegate tasks confidently. Manus transforms research and operational planning into a streamlined process—whether it’s developing a comprehensive travel itinerary, analyzing intricate financial data, or generating insightful reports. With Manus, your ideas are not only understood but also turned into tangible actions.
Technical Details and Benefits
At its core, Manus harnesses advanced artificial intelligence that combines large language models with multi-modal processing and robust tool integration. This technology empowers Manus to autonomously perform a wide range of tasks, from data visualization and content generation to managing workflows and performing code operations. Its design includes an adaptive learning system that refines its responses based on user interactions, ensuring that the AI becomes more tailored and efficient over time. The ability to interact directly with web browsers, code editors, and database systems sets Manus apart from other AI assistants that simply offer advice. This convergence of cognitive depth and operational ability leads to enhanced productivity, reduced manual workloads, and more accurate decision-making processes.
Key Features of Manus AI
• Advanced browser control that effectively handles CAPTCHAs
• Capabilities for file creation and editing
• Ability to deploy complete websites directly from prompts
• Deep research with well-organized reports
Benchmarks
Examples
(1) Create Interactive Website based on Data Insights
(2) Stock Analysis
and many examples
Check out the Portal here. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 80k+ ML SubReddit.
🚨 Meet Parlant: An LLM-first conversational AI framework designed to provide developers with the control and precision they need over their AI customer service agents, utilizing behavioral guidelines and runtime supervision. 🔧 🎛️ It’s operated using an easy-to-use CLI 📟 and native client SDKs in Python and TypeScript 📦. | 10 |
Reddit Vote Flip Share 0 Shares
In today’s digital era, the way we work is rapidly evolving, yet many challenges persist. Conventional AI assistants and manual workflows struggle to keep pace with the complexity and volume of modern tasks. Professionals and businesses face repetitive manual processes, inefficient research methods, and a lack of true automation. While traditional tools offer suggestions and basic automation, they fall short in transforming ideas into actionable results. The demand for a more capable, autonomous agent is evident—one that can seamlessly bridge the gap between human thought and operational execution, freeing users from mundane tasks and enabling them to focus on creativity and strategy.
Meet Manus – A New AI Agent with Deep Research + Operator + Computer Use + Lovable + Memory
Meet Manus: a super trending chineese AI agent designed to revolutionize productivity. Manus combines deep research capabilities with the autonomy to operate digital tools, making it much more than a conventional assistant. It is engineered to think deeply, execute complex tasks on your computer, and even maintain a personalized memory of your interactions. The agent is as engaging as it is effective, with an intuitive interface that invites users to delegate tasks confidently. Manus transforms research and operational planning into a streamlined process—whether it’s developing a comprehensive travel itinerary, analyzing intricate financial data, or generating insightful reports. With Manus, your ideas are not only understood but also turned into tangible actions.
Technical Details and Benefits
At its core, Manus harnesses advanced artificial intelligence that combines large language models with multi-modal processing and robust tool integration. This technology empowers Manus to autonomously perform a wide range of tasks, from data visualization and content generation to managing workflows and performing code operations. Its design includes an adaptive learning system that refines its responses based on user interactions, ensuring that the AI becomes more tailored and efficient over time. The ability to interact directly with web browsers, code editors, and database systems sets Manus apart from other AI assistants that simply offer advice. This convergence of cognitive depth and operational ability leads to enhanced productivity, reduced manual workloads, and more accurate decision-making processes.
Key Features of Manus AI
• Advanced browser control that effectively handles CAPTCHAs
• Capabilities for file creation and editing
• Ability to deploy complete websites directly from prompts
• Deep research with well-organized reports
Benchmarks
Examples
(1) Create Interactive Website based on Data Insights
(2) Stock Analysis
and many more examples.
Check out the Portal here. All credit for this research goes to the researchers of this project.
Welcome back to Week in Review. This week we’re looking at OpenAI potentially charging $20,000 a month for a specialized AI agent, the unexpected return of early-internet darling Digg, a company genetically engineering mice to have mammoth-like fur, and more! Let’s do this.
OpenAI could charge up to $20,000 per month for specialized AI “agents.” According to a report from The Information, OpenAI intends to launch several “agent” products tailored for different applications. One of the rumored agents, said to be priced at $20,000 a month, will be aimed at supporting “PhD-level research.” The jaw-dropping figure is indicative of how much cash OpenAI needs right now: The company lost roughly $5 billion last year after paying for costs related to running its services and other expenses.
Scale AI is being investigated by the U.S. Department of Labor for compliance with the Fair Labor Standards Act, which regulates unpaid wages, misclassification of employees as contractors, and illegal retaliation against workers. The investigation has been active since at least August 2024 and is still ongoing, according to a source familiar with the matter. A spokesperson for Scale AI told TechCrunch that the investigation was initiated during the previous presidential administration and that the startup felt its work was misunderstood by regulators then.
A federal judge denied Elon Musk’s motion for an injunction that would have halted OpenAI’s planned transition into a for-profit company, citing insufficient evidence. However, U.S. District Court Judge Yvonne Gonzalez Rogers said the court is prepared to hold an expedited trial based on the claim that OpenAI’s conversion plan is unlawful. It’s the latest turn in Musk’s lawsuit, which accuses OpenAI and CEO Sam Altman of abandoning its original nonprofit mission.
This is TechCrunch’s Week in Review, where we recap the week’s biggest news. Want this delivered as a newsletter to your inbox every Saturday? Sign up here.
News
Welcome back, Digg: One of the web’s early news aggregators is back under the ownership of its original founder Kevin Rose and Reddit co-founder Alexis Ohanian. Rose told TechCrunch that the revived Digg won’t be like “your old-school forums.” Read more
Google, look at my screen: Google unveiled a new Gemini feature called “Screenshare” at Mobile World Congress 2025 that will let users share what’s on their phone’s screen with the AI chatbot and ask questions about what it sees in real time. Read more
An “AI phone” for less than $1K: Deutsche Telekom announced that it is building an “AI Phone,” a low-cost handset created in close collaboration with Perplexity. DT plans to unveil the device in the second half of the year, and it will start selling it in 2026 for less than $1,000. Read more
It’s-a me, artificial intelligence! UCSD research org Hao AI Lab threw AI models into a Super Mario Bros. emulator to benchmark performance. Anthropic’s Claude 3.7 performed the best, whereas OpenAI’s GPT-4o struggled. Read more
Volkswagen’s cheapest EV ever: Volkswagen this week revealed the ultra-cheap EV called the ID EVERY1. According to a source familiar with the new model, the small four-door hatchback will be the first to roll out with software and architecture from Rivian. Read more
Going ghost mode: Getting ghosted is never fun — especially if you’re a founder seeking capital from investors. TechCrunch spoke to several VCs about why they ghost and how founders can make a more meaningful impression. Read more
ChatGPT can directly edit your code: The newest version of the macOS ChatGPT app can directly edit code in supported developer tools. ChatGPT Plus, Pro, and Team subscribers can use the feature now, and the company plans to roll it out to more users next week. Read more
A cool use case for AI: Wildlife researchers use camera traps to study animal populations, but it can take weeks to sift through all that data. Now Google has open sourced SpeciesNet, an AI model that can identify animal species by analyzing photos from camera traps. Read more
A new way to watch YouTube ad-free: YouTube Lite is a new subscription tier that lets users watch most videos ad-free for $7.99 per month. However, it won’t have Premium features like downloads, background play, or the ability to watch music videos ad-free. Read more
All hail the woolly mouse: Colossal Biosciences, which is trying to resurrect the woolly mammoth by 2028, has made an adorable inroad by genetically engineering mice to have mammoth-like fur. These are the cutest things I’ve ever seen. I cannot recommend watching the video enough. Read more
Analysis
Why is Signal such a hit in the Netherlands? Privacy-focused messaging app Signal has been flying high in the Dutch app stores this past month, often sitting at the top as the most downloaded free app on iOS and Android across all categories. While it’s difficult to pinpoint one specific reason, Bits of Freedom senior policy adviser Rejo Zenger is not surprised. Recent developments in the U.S. have seen the big platform providers align with the new Trump administration, and Europe’s reliance on technology from huge private U.S. companies has become a focal point in that debate. Read more