We’ve taken over 200k machine learning research papers and clustered them for your interpretation & research
In recent years, the number of research papers has grown tremendously. New areas pop up every day, but it is not always clear which of them are emerging or which interesting new area has just surfaced. So I decided to cluster 200k+ interesting machine learning papers that were published recently.
Technical Write-up
I created the vectors using a fine-tuned version of Sentence Transformer’s roberta-base model.
There were a few constraints going in, and a few takeaways coming out:
- The training had to be unsupervised, since there were no labels telling us what was in the dataset
- An NLP embeddings-based approach with unsupervised clustering would be the simplest way to surface insights
- Federated Learning was new to me 👍
- Graph GANs are really interesting 😃
- Representation Learning seems to be a lot broader than I expected 👍
In order to get some form of unsupervised domain adaptation, I used off-the-shelf BART for unsupervised query generation and then fine-tuned my roberta embeddings using Multiple Negatives Ranking loss. This seemed to work quite well: the topics separated out nicely in my embeddings projector.
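To make the objective concrete, here is a minimal numpy sketch of Multiple Negatives Ranking loss with in-batch negatives (my own reimplementation of the idea behind sentence-transformers' `MultipleNegativesRankingLoss`, not the author's actual training code): each generated query's positive is the paper it was generated from, and every other paper in the batch acts as a negative.

```python
import numpy as np

def multiple_negatives_ranking_loss(query_emb, doc_emb, scale=20.0):
    """Multiple Negatives Ranking loss with in-batch negatives:
    the i-th query's positive is the i-th document; every other
    document in the batch serves as a negative."""
    # L2-normalise so dot products are cosine similarities
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    scores = scale * (q @ d.T)  # (batch, batch) scaled similarity matrix
    # Cross-entropy with the diagonal as the target class, computed stably
    shifted = scores - scores.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimising this pushes each query toward its own paper's embedding and away from the rest of the batch, which is what drives the topic separation seen in the projector.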
I then trained the model on the titles and abstracts of the research papers so that it could better adapt to the domain.
Afterwards, I encoded the titles and clustered the resulting embeddings using a simple k-means algorithm.
The dataset curation process was fairly straightforward. I used the arXiv API and scraped 200k papers matching the query “machine learning” sometime in late 2020.
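The scraping pattern looks roughly like the sketch below. The endpoint and the `search_query`/`start`/`max_results` parameters are from the public arXiv API; the helper names are mine, and this is an illustration of the paging pattern rather than the author's actual scraper:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

ARXIV_API = "http://export.arxiv.org/api/query"

def build_query_url(search="all:machine learning", start=0, max_results=100):
    """The arXiv API pages results via start/max_results, so collecting
    200k papers means iterating over many such requests."""
    return ARXIV_API + "?" + urlencode(
        {"search_query": search, "start": start, "max_results": max_results}
    )

def fetch_page(start=0, max_results=100):
    # Returns raw Atom XML; titles and abstracts can then be pulled
    # out with xml.etree.ElementTree or the feedparser library
    with urlopen(build_query_url(start=start, max_results=max_results)) as resp:
        return resp.read().decode("utf-8")
```

In practice you would loop `start` upward in steps of `max_results`, with a polite delay between requests, until the desired number of papers is collected.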