1 min read · from Machine Learning

[P] Using YouTube as a data source (lessons from building a coffee domain dataset)


I started working on a small coffee coaching app recently - something that could answer questions around brew methods, grind size, extraction, etc.

I was looking for good data and realized most written sources are either shallow or scattered. YouTube, on the other hand, has insanely high-quality content (James Hoffmann, Lance Hedrick, etc.), but it’s not usable out of the box for RAG.

Transcripts are messy, chunking is inconsistent, and getting everything into a usable format took way more effort than expected.

So I made a small CLI tool that:

  • pulls videos from a channel
  • extracts transcripts
  • cleans + chunks them into something usable for embeddings


It basically became the data layer for my app, and funnily enough it ended up getting way more traction than my actual coffee coaching app!

Repo: youtube-rag-scraper

submitted by /u/ravann4


Tagged with

#YouTube
#transcripts
#coffee coaching app
#data layer
#chunking
#data source