We're building Just Release to solve a specific pain point: content creators spend more time writing descriptions, show notes, and social posts than actually creating content. After running a YouTube channel (100K+ subs), we found creators spending 90+ minutes on content distribution for each video.
Technical implementation:
- Custom ingestion system handling multiple content sources (YouTube, podcasts, RSS) through a single URL input
- High-accuracy transcription pipeline that identifies speakers and context
- Training on the creator's existing content to learn their voice and style
- Processing pipeline generating contextually aware titles, descriptions, and social posts
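To make the single-URL ingestion concrete, here's a minimal sketch of how a source dispatcher could work. This is illustrative only, not our actual implementation: the function name and heuristics are assumptions, and the real system would fetch the URL and sniff content types rather than rely on string matching alone.

```python
from urllib.parse import urlparse

def detect_source(url: str) -> str:
    """Classify a pasted URL as youtube, rss, or podcast (hypothetical dispatcher)."""
    parsed = urlparse(url)
    host = parsed.netloc.lower()
    if "youtube.com" in host or "youtu.be" in host:
        return "youtube"
    if parsed.path.lower().endswith((".rss", ".xml")):
        return "rss"
    # A real pipeline would fetch headers and check Content-Type before defaulting
    return "podcast"
```

Each source type would then route to its own fetcher (video download, feed parse, audio pull) before hitting the shared transcription stage.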
Key differentiators from typical AI tools:
- Learns from existing content instead of relying on generic prompts
- Works across 97 languages with native-language processing
- Handles multiple content types through a unified API
- Cross-references previous content for consistency
- No prompt engineering: just paste your channel URL
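The "learns from existing content" idea can be sketched as few-shot conditioning on the creator's own back catalog. This is a simplified assumption of the approach, not our production system (which conditions on far more than titles); the function and parameter names are hypothetical.

```python
def build_style_prompt(past_titles: list[str], transcript_summary: str) -> str:
    """Assemble a few-shot prompt from a creator's recent titles (illustrative sketch)."""
    examples = "\n".join(f"- {t}" for t in past_titles[-5:])  # most recent five
    return (
        "You write video titles in this creator's voice. Recent titles:\n"
        f"{examples}\n\n"
        f"Video summary: {transcript_summary}\n"
        "Write a title matching the style above."
    )
```

Because the examples come from the channel itself, the creator never writes a prompt; the system derives the style signal from the pasted URL's history.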
Current metrics (Day 2 of 4):
- Live transcription running with high accuracy
- Post-production writing time cut from 90 to 7 minutes
- 64 creators on the waitlist after the first 48 hours
We're building this in 4 days as an experiment in rapid product development. Currently processing content for early users with significantly better accuracy in title/description generation compared to generic LLMs.
Looking for feedback on:
- Additional input formats worth supporting
- Core technical assumptions about learning from previous content
- Processing pipeline optimizations
- Language-handling edge cases