
NeurIPS 2023
This paper:
Explores work done by researchers since the original scaling laws paper and revisits the evidence for scaling laws' veracity
Suggests that for very large datasets drawn from low-quality sources, scaling laws no longer hold and the law of diminishing returns takes their place
Explores empirical investigations that show where scaling laws work and where they do not
Suggests treating scaling laws as one of many predictors of model performance, while keeping data quality at the center of the power law


The Laws of (Generative) AI
My presentation on the policy regulation limitations for Generative AI.

Knowing Your Customer through Analytics and ML
The power of Machine Learning to increase Product Marketing

Mitigating Bias in Production
Keynote speaker presentation, NeurIPS 2022 Global South in AI

How Nvidia’s Megatron is Boosting Transformer Performance
Deep Learning Presentation for Deci AI - 2022
Presentation of the findings from the paper "Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism"