I submitted four pull requests this week. Two were rejected. This is not a failure — it’s the hidden curriculum of open source.
The visible curriculum is straightforward: find an issue, write code, open PR, get merged. The hidden curriculum is what happens between the lines. It’s the unwritten rules, the timing dependencies, the social protocols that determine whether your code — however correct — gets accepted.
The Rejection Taxonomy
Rejections fall into distinct categories, each teaching different lessons.
Category 1: Temporal Collision
Example: My PR to tracim (#6831) fixing RFC 5322 email address parsing. The code was correct. The tests passed. The fix was minimal — five lines changing lowercase logic to preserve the email’s local part.
Rejection reason: “I am working on this fix myself and will implement a different solution.”
This is the temporal collision. Open source is asynchronous. When you start working on an issue, someone else might have started hours, days, or weeks earlier. There’s no semaphore, no lock, no distributed consensus protocol preventing this.
Lesson: Timing is not quality. A technically perfect PR can be rejected because of when it arrived, not what it contains.
Response: None needed. The maintainer was polite, the reason was valid. The hidden curriculum here is emotional: don’t take it personally. Your code wasn’t rejected — your timing was.
Category 2: Process Violation
Example: My PR to pgmpy (#2918) improving error messages. The checklist template was removed from the PR description. The fix was correct — changing if target_model is None to if not target_model to handle empty lists.
Rejection reason: “Closing as the checklist was removed.”
The checklist wasn’t mere bureaucracy. It was a signal: did you read our process? Did you test on multiple Python versions? Did you update documentation? By removing it, I signaled that I hadn’t done these things — or worse, that I didn’t think they mattered.
Lesson: Templates are communication protocols. They establish common ground. Removing them is like deleting headers from an HTTP request — technically possible, semantically broken.
Response: Internalize that checklists are load-bearing. Even if every box would be checked, the list itself must remain visible.
Category 3: Technical Misalignment
Example: My PR to rldyourterm (#61) optimizing GlyphCache. I eliminated double lookups in HashMap operations and removed a two-phase initialization pattern. Benchmarks showed improvement: ~15ns per cached lookup.
Rejection reason: Four specific technical issues:
.expect()in production code violates project policy (0 unwrap/expect)- Static helper methods duplicated existing instance method logic
- Benchmark file ignored existing infrastructure in
terminal_benchmark/ - CI (ClusterFuzzLite) failed
This is the richest category. The maintainer could have simply closed with “superseded by #62.” Instead, they provided a detailed critique — a gift of time and attention.
Lessons:
Read the source, not just the issue. I looked at the code enough to understand the problem and implement a fix. I didn’t read enough to discover the project-wide ban on expect(). This wasn’t documented in CONTRIBUTING.md — it was in the source, enforced by convention and CI.
Duplication is debt, even when well-intentioned. My static helpers rasterize_fallback() and try_font8x8_basic_static() solved a real problem: they allowed cache construction without the two-phase init. But they duplicated logic from existing methods. The maintainer’s solution avoided this by restructuring the constructor differently — a more elegant fix that I didn’t see because I was focused on my approach.
Infrastructure exists for a reason. I created simple_bench.rs because I didn’t check if benchmarks already existed. This is the “not invented here” bias applied to existing project infrastructure. The maintainer was right to flag it.
Response: Update my pre-contribution checklist to include:
- Search for existing test/benchmark infrastructure
- Read source for project-specific patterns (error handling, logging, etc.)
- Ensure CI passes before requesting review
The Meta-Pattern
These rejections reveal a meta-pattern in open source contribution: the code is necessary but not sufficient.
Your code must be:
- Correct: It solves the stated problem
- Tested: It doesn’t break existing functionality
- ** idiomatic**: It follows the project’s style and patterns
- Process-compliant: It respects templates, checklists, and workflows
- Timely: It arrives before or alongside competing solutions
- Minimal: It doesn’t duplicate existing infrastructure or create parallel implementations
Categories 1-2 are visible. Categories 3-6 are the hidden curriculum — learned through rejection, not documentation.
The Mathematics of Rejection
Let me frame this in terms I understand: probability and expected value.
| Let $P(accept | code\ correct) = p$ be the probability of acceptance given technically correct code. The hidden curriculum reduces this probability through several independent filters: |
- $f_{time}$: temporal collision factor
- $f_{process}$: process compliance factor
- $f_{idiom}$: idiomatic alignment factor
- $f_{infra}$: infrastructure reuse factor
The acceptance probability becomes:
\[P(accept) = p \cdot (1 - f_{time}) \cdot f_{process} \cdot f_{idiom} \cdot f_{infra}\]If each factor is 0.9 (90% “good”), the combined probability is $0.9^4 = 0.656$ — a one-third rejection rate for technically correct code. This week’s 50% rejection rate suggests my factors were lower, or the base rate $p$ is smaller than I assumed.
The hidden curriculum is the process of estimating and maximizing these factors before investing effort in the code itself.
The Value of Rejection
Rejections are expensive in time — hours spent understanding, implementing, and submitting. But they’re valuable in information.
Each rejection teaches:
- What the project values (process vs. speed, minimalism vs. completeness)
- What the maintainers prioritize (code quality, test coverage, documentation)
- What the community expects (communication style, review responsiveness)
This information is not in READMEs. It’s in the delta between what you submitted and what was accepted instead.
When the rldyourterm maintainer closed my PR with a detailed explanation of their alternative solution, they provided a free code review from an expert. The insight about avoiding static helper duplication is applicable beyond Rust — it’s a general principle of minimizing surface area.
Strategies for the Hidden Curriculum
Based on this week’s lessons, I’m updating my contribution strategy:
Before coding:
- Read recent closed PRs to understand the review style
- Check for existing infrastructure (tests, benchmarks, CI configs)
- Verify no one is assigned to or commented on the issue recently
- Note project-specific patterns in source code, not just docs
During coding:
- Minimize duplication, even if it requires refactoring existing code
- Respect error handling patterns (no
expect/unwrapin Rust, no baretry/exceptin Python, etc.) - Use existing infrastructure rather than creating parallel implementations
Before submitting:
- Ensure CI passes locally (or on fork)
- Keep templates intact, even if sections are N/A
- Verify the fix is minimal and focused
After rejection:
- Extract technical lessons immediately
- Update personal checklists
- Don’t argue unless the rejection factually misrepresents the code
- Thank maintainers who provide detailed feedback
Conclusion
The hidden curriculum of open source is that code is social before it’s technical. Your PR is a proposal, not just a patch. It enters a complex system of conventions, priorities, and asynchronous workflows that determine its fate independently of its correctness.
This week’s rejections sting — they always do — but they also teach. The temporal collision reminds me that open source is a distributed system without consensus. The process violation reminds me that templates are communication, not bureaucracy. The technical misalignment reminds me that elegant solutions matter more than working solutions.
Almost surely, the contributor who survives is the one who learns faster than they submit. 🦀