
Unlocking Structured Commonsense Reasoning with Code-LLMs
23 Apr 2025
COCOGEN pioneers the use of Code-LLMs for structured commonsense generation, opening new avenues for NLP tasks that require structured prediction and reasoning.

Using Code-LLMs for Symbolic and Structured Reasoning
23 Apr 2025
COCOGEN uses Code-LLMs for structured commonsense generation, going beyond traditional symbolic reasoning by translating structured generation tasks into Python code.

Structured LLM Prompts Drive Better Results with COCOGEN
23 Apr 2025
COCOGEN’s performance gains stem from combining structured code prompts with Code-LLMs, outperforming text-based models even under dynamic or duplicated prompt inputs.

COCOGEN Sets Few-Shot Benchmark in Entity and Argument Graph Tasks
22 Apr 2025
COCOGEN delivers top-tier results in entity state tracking and argument graph generation, surpassing fine-tuned models with just a few Python-coded examples.

Study Shows Few-Shot Code Generation Outperforms Fine-Tuned Models
22 Apr 2025
COCOGEN beats GPT-3 and fine-tuned models in structured commonsense tasks with just 15 Python-based examples. Efficient, powerful, and precise.
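
For readers wondering what "15 Python-based examples" looks like in practice, below is a minimal, hypothetical sketch of how a few-shot code prompt can be assembled; the build_prompt helper and the placeholder Plan snippets are illustrative assumptions, not COCOGEN's actual prompt contents.

```python
# Illustrative sketch of few-shot prompting with code-formatted examples.
# The serialized example graphs and the stub for the new input below are
# hypothetical placeholders, not COCOGEN's actual prompt.

def build_prompt(solved_examples: list[str], new_input_stub: str) -> str:
    """Concatenate solved examples (as Python snippets) followed by an
    unfinished snippet that the code model is asked to complete."""
    return "\n\n".join(solved_examples + [new_input_stub])


solved_examples = [
    'class Plan:\n    goal = "plant a tree"\n    # ... nodes and edges ...',
    'class Plan:\n    goal = "wash the car"\n    # ... nodes and edges ...',
]

# The model is expected to continue this snippet with nodes and edges.
new_input_stub = 'class Plan:\n    goal = "bake bread"\n'

print(build_prompt(solved_examples, new_input_stub))
```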

Why Converting Graphs to Python Code Improves AI Reasoning
22 Apr 2025
COCOGEN converts commonsense graphs into Python code, helping Code-LLMs outperform traditional models in structured reasoning tasks.
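
As a concrete, hypothetical illustration of what "converting a graph into Python code" can look like, the sketch below writes a small plan graph as a Python class with steps and precedence edges; the Plan class and its attribute names are illustrative choices, not COCOGEN's exact serialization format.

```python
# Minimal, illustrative sketch (not COCOGEN's exact prompt format):
# a small commonsense plan graph is written down as Python code, so a
# code LLM can be asked to complete new graphs in the same style.

class Plan:
    goal = "make a cup of tea"

    def __init__(self):
        # Nodes are steps; edges say which step must happen before which.
        boil_water = "boil water"
        get_cup = "get a cup"
        add_tea_bag = "add a tea bag to the cup"
        pour_water = "pour the boiling water into the cup"

        self.edges = [
            (boil_water, pour_water),
            (get_cup, add_tea_bag),
            (add_tea_bag, pour_water),
        ]


if __name__ == "__main__":
    plan = Plan()
    print("Goal:", plan.goal)
    for before, after in plan.edges:
        print(f"{before} -> {after}")
```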

AI Understands Commonsense Reasoning Better When It Thinks Like a Programmer
22 Apr 2025
Code-based language models outperform traditional LLMs in structured commonsense reasoning by converting graph tasks into code.

Goodbye, Compute-Hungry Models: This Tiny AI Is the Future of Prediction
21 Feb 2025
Researchers have developed a practical, efficient alternative to massive AI models for time series forecasting.
