🔗 RQ3: Mapping Data Issues to Code Defects
How do data defects cause code generation failures? We formalize a causal framework of 18 mechanisms through which training-data defects propagate into defects in generated code.
Propagation Mechanisms
We identify two primary types of propagation:
- Direct Mappings (10 types): Classic “garbage in, garbage out” replication. The model memorizes dataset flaws (e.g., outdated APIs, security vulnerabilities) and reproduces them verbatim in its output.
- Indirect Mappings (8 types): More insidious propagation. Non-code defects inject no explicit errors but distort the model’s internal representations through mechanisms such as:
- Entropy Collapse: Redundant data drives the model toward deterministic but incorrect outputs.
- Representation Bias: Skewed data distributions lead the model to over-rely on common but suboptimal patterns.
- Semantic Drift: Low-value text blurs the model’s understanding of code intent.
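The two mapping types above suggest different symptoms to probe for. A minimal sketch (the deprecated-API deny-list and the helper names are hypothetical, not from the paper): a direct mapping can be caught by scanning generated code for memorized stale APIs, while one symptom of entropy collapse, an indirect mapping, is near-zero Shannon entropy across many sampled completions of the same prompt.

```python
import math
from collections import Counter

# Hypothetical deny-list of deprecated APIs a model may have memorized
# from stale training data (a direct mapping: flaw in, flaw out).
DEPRECATED_APIS = {"imp.load_module", "asyncio.get_event_loop"}

def flag_direct_mappings(generated_code: str) -> set:
    """Flag deprecated API calls replicated verbatim in generated code."""
    return {api for api in DEPRECATED_APIS if api in generated_code}

def sample_entropy(samples: list) -> float:
    """Shannon entropy (bits) over distinct sampled completions.
    Near-zero entropy across many samples of one prompt is a symptom
    of entropy collapse: deterministic, possibly wrong, outputs."""
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

completion = "loop = asyncio.get_event_loop()\n"
flagged = flag_direct_mappings(completion)       # direct mapping detected
collapsed = sample_entropy([completion] * 10)    # 10 identical samples -> 0 bits
diverse = sample_entropy(["a", "b", "a", "b"])   # two equally likely outputs -> 1 bit
```

Real detectors would of course parse the code and query the model's token distributions directly; this only illustrates what the two failure signatures look like.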
Visualizing the Propagation
Fig. 5. Mapping mechanisms from Training Data Issues to Generated Code Issues.
📄 Referenced Papers
LLMs Meet Library Evolution
LLMs Meet Library Evolution: Evaluating Deprecated API Usage in LLM-based Code Completion
Less is More
Less is More: On the Importance of Data Quality for Unit Test Generation
Qwen
Qwen Technical Report
DataMan
DataMan: Data Manager for Pre-training Large Language Models
Phi-4
Phi-4 Technical Report
Copilot Security
Is GitHub’s Copilot as Bad as Humans at Introducing Vulnerabilities in Code?
Copilot Evaluation
An Empirical Evaluation of GitHub Copilot’s Code Suggestions
HalluCode
Exploring and Evaluating Hallucinations in LLM-Powered Code Generation
CodeHalu
CodeHalu: Investigating Code Hallucinations in LLMs via Execution-based Verification
SStuBs
Large Language Models and Simple, Stupid Bugs
package hallucinations
We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs
HallTrigger
Code Hallucination
Large Language Models for Code
Large Language Models for Code: Security Hardening and Adversarial Testing
Purple Llama CYBERSECEVAL
Purple Llama CYBERSECEVAL: A Secure Coding Benchmark for Language Models
Lost at C
Lost at C: A User Study on the Security Implications of Large Language Model Code Assistants
AI Assistants Security
Do Users Write More Insecure Code with AI Assistants?
Bugs in LLM-generated Code
Bugs in Large Language Models Generated Code: An Empirical Study
GitHub Copilot, Amazon CodeWhisperer, ChatGPT
Evaluating the Code Quality of AI-Assisted Code Generation Tools: An Empirical Study on GitHub Copilot, Amazon CodeWhisperer, and ChatGPT
ChatGPT Code Quality
No Need to Lift a Finger Anymore? Assessing the Quality of Code Generation by ChatGPT
CloudAPIBench
On Mitigating Code LLM Hallucinations with API Documentation
CodeMirage
CodeMirage: Hallucinations in Code Generated by Large Language Models
Syntactic Robustness
Syntactic Robustness for LLM-based Code Generation
NLPerturbator
NLPerturbator: Studying the Robustness of Code LLMs to Natural Language Variations
AutoAPIEval
A Comprehensive Framework for Evaluating API-oriented Code Generation in Large Language Models
DeSec
Decoding Secret Memorization in Code LLMs Through Token-Level Characterization
When Fine-Tuning LLMs Meets Data Privacy
When Fine-Tuning LLMs Meets Data Privacy: An Empirical Study of Federated Learning in LLM-Based Program Repair
Bias Unveiled
Bias Unveiled: Investigating Social Bias in LLM-Generated Code
FairCoder
FairCoder: Evaluating Social Bias of LLMs in Code Generation
CodeIP
CodeIP: A Grammar-Guided Multi-Bit Watermark for Large Language Models of Code
DeVAIC
DeVAIC: A Tool for Security Assessment of AI-generated Code
Software Librarian
Is ChatGPT a Good Software Librarian? An Exploratory Study on the Use of ChatGPT for Software Library Recommendations
Codequal Analyzer
Improving LLM-Generated Code Quality with GRPO
Artificial-Intelligence Generated Code Considered Harmful
Artificial-Intelligence Generated Code Considered Harmful: A Road Map for Secure and High-Quality Code Generation
Unveiling Inefficiencies in LLM-Generated Code
Unveiling Inefficiencies in LLM-Generated Code: Toward a Comprehensive Taxonomy
Python Tests Quality
Quality Assessment of Python Tests Generated by Large Language Models
CoQuIR
CoQuIR: A Comprehensive Benchmark for Code Quality-Aware Information Retrieval
REAL
Training Language Models to Generate Quality Code with Program Analysis Feedback
CIDRe
CIDRe: A Reference-Free Multi-Aspect Criterion for Code Comment Quality Measurement
Infinite-Instruct
Infinite-Instruct: Synthesizing Scaling Code Instruction Data with Bidirectional Synthesis and Static Verification
Quality In, Quality Out
Quality In, Quality Out: Investigating Training Data's Role in AI Code Generation
Security and Quality in LLM-Generated Code
Security and Quality in LLM-Generated Code: A Multi-Language, Multi-Model Analysis
SwallowCode
Rewriting Pre-Training Data Boosts LLM Performance in Math and Code
ROSE
ROSE: Transformer-Based Refactoring Recommendation for Architectural Smells
Refining ChatGPT-Generated Code
Refining ChatGPT-Generated Code: Characterizing and Mitigating Code Quality Issues
Qwen2.5
Qwen2.5 Technical Report
ReCode
ReCode: Updating Code API Knowledge with Reinforcement Learning
Data-efficient Fine-tuning
Data-efficient LLM Fine-tuning for Code Generation
CRPE
CRPE: Expanding The Reasoning Capability of Large Language Model for Code Generation
DeepSeek-Coder
DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence
GPT-4
GPT-4 Technical Report
Code Pretraining
How Does Code Pretraining Affect Language Model Task Performance?
StarCoder 2 and The Stack v2
StarCoder 2 and The Stack v2: The Next Generation
CodeSmellEval
How Propense Are Large Language Models at Producing Code Smells? A Benchmarking Study
RPG
Rethinking Repetition Problems of LLMs in Code Generation
Repetition In Repetition Out
Repetition In Repetition Out: Towards Understanding Neural Text Degeneration from the Data Perspective
Code Data Training Stage
At Which Training Stage Does Code Data Help LLMs Reasoning?
Brevity is the soul of wit
Brevity is the soul of wit: Pruning long files for code generation
Benchmark Builders
Large Language Models are Qualified Benchmark Builders: Rebuilding Pre-Training Datasets for Advancing Code Intelligence Tasks
Generated Code Diversity
Is Functional Correctness Enough to Evaluate Code Language Models? Exploring Diversity of Generated Codes
CodeMI
Does Your Neural Code Completion Model Use My Code? A Membership Inference Approach
CodeCipher
CodeCipher: Learning to Obfuscate Source Code Against LLMs
Code Pre-training Impact
To Code, or Not To Code? Exploring Impact of Code in Pre-training
DataComp-LM
DataComp-LM: In search of the next generation of training sets for language models
RedStone
RedStone: Curating General, Code, Math, and QA Data for Large Language Models
Code Llama
Code Llama: Open Foundation Models for Code
Codex
Evaluating Large Language Models Trained on Code
Path Planning Evaluation
Assessing LLM code generation quality through path planning tasks
CODEJUDGE
CODEJUDGE: Evaluating Code Generation with Large Language Models
Datasets for Large Language Models
Datasets for Large Language Models: A Comprehensive Survey
Synthetic Data Generation
Synthetic Data Generation Using Large Language Models: Advances in Text and Code
Cracks in The Stack
Cracks in The Stack: Hidden Vulnerabilities and Licensing Risks in LLM Pre-Training Datasets
Unseen Horizons
Unseen Horizons: Unveiling the Real Capability of LLM Code Generation Beyond the Familiar
RTL-Breaker
RTL-Breaker: Assessing the Security of LLMs Against Backdoor Attacks on HDL Code Generation
MG-Verilog
MG-Verilog: Multi-grained Dataset Towards Enhanced LLM-assisted Verilog Generation
Code Generation Survey
A Survey on Large Language Models for Code Generation
DataRecipe
DataRecipe --- How to Cook the Data for CodeLLM?
Training Data Extraction
Understanding Privacy Risks of Large Language Models in Japanese Based on Training Data Extraction Attacks
aiXcoder-7B
aiXcoder-7B: A Lightweight and Effective Large Language Model for Code Processing
Imperfect Code Generation
Imperfect Code Generation: Uncovering Weaknesses in Automatic Code Generation by Large Language Models
Inter-Dataset Code Duplication
On Inter-Dataset Code Duplication and Data Leakage in Large Language Models
LLM-ProS
LLM-ProS: Analyzing Large Language Models’ Performance in Competitive Problem Solving
UCD-Training
Unseen-Codebases-Domain Data Synthesis and Training Based on Code Graphs
ShortCoder
ShortCoder: Knowledge-Augmented Syntax Optimization for Token-Efficient Code Generation
Beyond Functional Correctness
Beyond functional correctness: Investigating coding style inconsistencies in large language models
RustEvo^2
RustEvo^2: An Evolving Benchmark for API Evolution in LLM-based Rust Code Generation