🔗 RQ3: Mapping Data Issues to Code Defects

How do data defects cause code generation failures? We formalize a causal framework of 18 mapping mechanisms that trace how training data issues propagate into defects in generated code.


Propagation Mechanisms

We identify two primary types of propagation:

  1. Direct Mappings (10 types): Classic “garbage in, garbage out” replication: the model memorizes dataset flaws (e.g., outdated APIs, security vulnerabilities) and reproduces them in its output.
  2. Indirect Mappings (8 types): A more insidious form of propagation: non-code defects inject no explicit errors, but instead degrade the model’s internal representations via mechanisms such as:
    • Entropy Collapse: Redundancy leading to deterministic but incorrect outputs.
    • Representation Bias: Skewed distributions causing the model to over-rely on common but suboptimal patterns.
    • Semantic Drift: Low-value text confusing the model’s understanding of code intent.
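As a toy illustration of Entropy Collapse (a hypothetical sketch, not an experiment from any surveyed paper): duplicating one dominant snippet in a corpus sharpens the empirical next-token distribution, driving its Shannon entropy toward zero, so the model’s sampling becomes near-deterministic around the duplicated (and possibly incorrect) pattern.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (in bits) of the empirical token distribution."""
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical "next tokens" observed after some code prefix in a corpus.
diverse = ["read", "write", "open", "close", "seek", "flush"]

# Redundant data: the same duplicated snippet swamps all other continuations.
redundant = diverse + ["read"] * 30

print(round(shannon_entropy(diverse), 3))    # uniform over 6 tokens: log2(6) ~ 2.585 bits
print(round(shannon_entropy(redundant), 3))  # collapses toward the duplicated token
```

The same entropy measure, applied to a model’s actual output distribution before and after training on duplicated data, is one way the diversity loss behind Entropy Collapse can be quantified.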

Visualizing the Propagation

[Sankey diagram: mapping from data issues to code issues]

Fig. 5. Mapping mechanisms from Training Data Issues to Generated Code Issues.


📄 Referenced Papers

LLMs Meet Library Evolution
LLMs Meet Library Evolution: Evaluating Deprecated API Usage in LLM-based Code Completion
2024-06 View Paper ↗
Less is More
Less is More: On the Importance of Data Quality for Unit Test Generation
2025-02 View Paper ↗
Qwen
Qwen Technical Report
2023-09 View Paper ↗
DataMan
DataMan: Data Manager for Pre-training Large Language Models
2025-02 View Paper ↗
Phi-4
Phi-4 Technical Report
2024-12 View Paper ↗
Copilot Security
Is GitHub’s Copilot as Bad as Humans at Introducing Vulnerabilities in Code?
2022-04 View Paper ↗
Copilot Evaluation
An Empirical Evaluation of GitHub Copilot’s Code Suggestions
2025-01 View Paper ↗
HalluCode
Exploring and Evaluating Hallucinations in LLM-Powered Code Generation
2024-04 View Paper ↗
CodeHalu
CodeHalu: Investigating Code Hallucinations in LLMs via Execution-based Verification
2024-05 View Paper ↗
SStuBs
Large Language Models and Simple, Stupid Bugs
2023-03 View Paper ↗
package hallucinations
We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs
2024-06 View Paper ↗
HallTrigger
Code Hallucination
2024-07 View Paper ↗
Large Language Models for Code
Large Language Models for Code: Security Hardening and Adversarial Testing
2023-02 View Paper ↗
Purple Llama CYBERSECEVAL
Purple Llama CYBERSECEVAL: A Secure Coding Benchmark for Language Models
2023-12 View Paper ↗
Lost at C
Lost at C: A User Study on the Security Implications of Large Language Model Code Assistants
2022-08 View Paper ↗
AI Assistants Security
Do Users Write More Insecure Code with AI Assistants?
2022-11 View Paper ↗
Bugs in LLM-generated Code
Bugs in Large Language Models Generated Code: An Empirical Study
2024-03 View Paper ↗
GitHub Copilot, Amazon CodeWhisperer, ChatGPT
Evaluating the Code Quality of AI-Assisted Code Generation Tools: An Empirical Study on GitHub Copilot, Amazon CodeWhisperer, and ChatGPT
2023-04 View Paper ↗
ChatGPT Code Quality
No Need to Lift a Finger Anymore? Assessing the Quality of Code Generation by ChatGPT
2023-08 View Paper ↗
CloudAPIBench
On Mitigating Code LLM Hallucinations with API Documentation
2024-07 View Paper ↗
CodeMirage
CodeMirage: Hallucinations in Code Generated by Large Language Models
2024-08 View Paper ↗
Syntactic Robustness
Syntactic Robustness for LLM-based Code Generation
2024-04 View Paper ↗
NLPerturbator
NLPerturbator: Studying the Robustness of Code LLMs to Natural Language Variations
2024-06 View Paper ↗
AutoAPIEval
A Comprehensive Framework for Evaluating API-oriented Code Generation in Large Language Models
2024-09 View Paper ↗
DeSec
Decoding Secret Memorization in Code LLMs Through Token-Level Characterization
2024-10 View Paper ↗
When Fine-Tuning LLMs Meets Data Privacy
When Fine-Tuning LLMs Meets Data Privacy: An Empirical Study of Federated Learning in LLM-Based Program Repair
2024-12 View Paper ↗
Bias Unveiled
Bias Unveiled: Investigating Social Bias in LLM-Generated Code
2024-11 View Paper ↗
FairCoder
FairCoder: Evaluating Social Bias of LLMs in Code Generation
2025-01 View Paper ↗
CodeIP
CodeIP: A Grammar-Guided Multi-Bit Watermark for Large Language Models of Code
2024-04 View Paper ↗
DeVAIC
DeVAIC: A Tool for Security Assessment of AI-generated Code
2024-04 View Paper ↗
Software Librarian
Is ChatGPT a Good Software Librarian? An Exploratory Study on the Use of ChatGPT for Software Library Recommendations
2024-08 View Paper ↗
Codequal Analyzer
Improving LLM-Generated Code Quality with GRPO
2025-06 View Paper ↗
Artificial-Intelligence Generated Code Considered Harmful
Artificial-Intelligence Generated Code Considered Harmful: A Road Map for Secure and High-Quality Code Generation
2024-09 View Paper ↗
Unveiling Inefficiencies in LLM-Generated Code
Unveiling Inefficiencies in LLM-Generated Code: Toward a Comprehensive Taxonomy
2025-03 View Paper ↗
Python Tests Quality
Quality Assessment of Python Tests Generated by Large Language Models
2025-06 View Paper ↗
CoQuIR
CoQuIR: A Comprehensive Benchmark for Code Quality-Aware Information Retrieval
2025-06 View Paper ↗
REAL
Training Language Models to Generate Quality Code with Program Analysis Feedback
2025-05 View Paper ↗
CIDRe
CIDRe: A Reference-Free Multi-Aspect Criterion for Code Comment Quality Measurement
2025-05 View Paper ↗
Infinite-Instruct
Infinite-Instruct: Synthesizing Scaling Code instruction Data with Bidirectional Synthesis and Static Verification
2025-05 View Paper ↗
Quality In, Quality Out
Quality In, Quality Out: Investigating Training Data's Role in AI Code Generation
2025-03 View Paper ↗
Security and Quality in LLM-Generated Code
Security and Quality in LLM-Generated Code: A Multi-Language, Multi-Model Analysis
2025-02 View Paper ↗
SwallowCode
Rewriting Pre-Training Data Boosts LLM Performance in Math and Code
2025-05 View Paper ↗
ROSE
ROSE: Transformer-Based Refactoring Recommendation for Architectural Smells
2025-07 View Paper ↗
Refining ChatGPT-Generated Code
Refining ChatGPT-Generated Code: Characterizing and Mitigating Code Quality Issues
2023-07 View Paper ↗
Qwen2.5
Qwen2.5 Technical Report
2024-12 View Paper ↗
ReCode
ReCode: Updating Code API Knowledge with Reinforcement Learning
2025-06 View Paper ↗
Data-efficient Fine-tuning
Data-efficient LLM Fine-tuning for Code Generation
2025-04 View Paper ↗
CRPE
CRPE: Expanding The Reasoning Capability of Large Language Model for Code Generation
2025-05 View Paper ↗
DeepSeek-Coder
DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence
2024-01 View Paper ↗
GPT-4
GPT-4 Technical Report
2023-03 View Paper ↗
Code Pretraining
How Does Code Pretraining Affect Language Model Task Performance?
2024-09 View Paper ↗
StarCoder 2 and The Stack v2
StarCoder 2 and The Stack v2: The Next Generation
2024-02 View Paper ↗
CodeSmellEval
How Propense Are Large Language Models at Producing Code Smells? A Benchmarking Study
2024-12 View Paper ↗
RPG
Rethinking Repetition Problems of LLMs in Code Generation
2025-05 View Paper ↗
Repetition In Repetition Out
Repetition In Repetition Out: Towards Understanding Neural Text Degeneration from the Data Perspective
2023-10 View Paper ↗
Code Data Training Stage
At Which Training Stage Does Code Data Help LLMs Reasoning?
2023-09 View Paper ↗
Brevity is the soul of wit
Brevity is the soul of wit: Pruning long files for code generation
2024-07 View Paper ↗
Benchmark Builders
Large Language Models are Qualified Benchmark Builders: Rebuilding Pre-Training Datasets for Advancing Code Intelligence Tasks
2025-04 View Paper ↗
Generated Code Diversity
Is Functional Correctness Enough to Evaluate Code Language Models? Exploring Diversity of Generated Codes
2024-08 View Paper ↗
CodeMI
Does Your Neural Code Completion Model Use My Code? A Membership Inference Approach
2024-04 View Paper ↗
CodeCipher
CodeCipher: Learning to Obfuscate Source Code Against LLMs
2024-10 View Paper ↗
Code Pre-training Impact
To Code, or Not To Code? Exploring Impact of Code in Pre-training
2024-08 View Paper ↗
DataComp-LM
DataComp-LM: In search of the next generation of training sets for language models
2024-06 View Paper ↗
RedStone
RedStone: Curating General, Code, Math, and QA Data for Large Language Models
2024-12 View Paper ↗
Code Llama
Code Llama: Open Foundation Models for Code
2023-08 View Paper ↗
Codex
Evaluating Large Language Models Trained on Code
2021-07 View Paper ↗
Path Planning Evaluation
Assessing LLM code generation quality through path planning tasks
2025-04 View Paper ↗
CODEJUDGE
CODEJUDGE : Evaluating Code Generation with Large Language Models
2024-01 View Paper ↗
Datasets for Large Language Models
Datasets for Large Language Models: A Comprehensive Survey
2024-02 View Paper ↗
Synthetic Data Generation
Synthetic Data Generation Using Large Language Models: Advances in Text and Code
2025-01 View Paper ↗
Cracks in The Stack
Cracks in The Stack: Hidden Vulnerabilities and Licensing Risks in LLM Pre-Training Datasets
2025-05 View Paper ↗
Unseen Horizons
Unseen Horizons: Unveiling the Real Capability of LLM Code Generation Beyond the Familiar
2025-04 View Paper ↗
RTL-Breaker
RTL-Breaker: Assessing the Security of LLMs Against Backdoor Attacks on HDL Code Generation
2025-03 View Paper ↗
MG-Verilog
MG-Verilog: Multi-grained Dataset Towards Enhanced LLM-assisted Verilog Generation
2024-06 View Paper ↗
Code Generation Survey
A Survey on Large Language Models for Code Generation
2024-08 View Paper ↗
DataRecipe
DataRecipe --- How to Cook the Data for CodeLLM?
2024-10 View Paper ↗
Training Data Extraction
Understanding Privacy Risks of Large Language Models in Japanese Based on Training Data Extraction Attacks
2025-08 View Paper ↗
aiXcoder-7B
aiXcoder-7B: A Lightweight and Effective Large Language Model for Code Processing
2025-04 View Paper ↗
Imperfect Code Generation
Imperfect Code Generation: Uncovering Weaknesses in Automatic Code Generation by Large Language Models
2024-05 View Paper ↗
Inter-Dataset Code Duplication
On Inter-Dataset Code Duplication and Data Leakage in Large Language Models
2025-01 View Paper ↗
LLM-ProS
LLM-ProS: Analyzing Large Language Models’ Performance in Competitive Problem Solving
2025-05 View Paper ↗
UCD-Training
Unseen-Codebases-Domain Data Synthesis and Training Based on Code Graphs
2026-02 View Paper ↗
ShortCoder
ShortCoder: Knowledge-Augmented Syntax Optimization for Token-Efficient Code Generation
2026-01 View Paper ↗
Beyond Functional Correctness
Beyond functional correctness: Investigating coding style inconsistencies in large language models
2024-06 View Paper ↗
RustEvo^ 2
RustEvo^ 2: An Evolving Benchmark for API Evolution in LLM-based Rust Code Generation
2025-03 View Paper ↗

© 2026 SYSUSELab. Systematic Review of Quality Issues in LLMs for Code.
