Deep Dive into AI with MLX and PyTorch

cover.png "Deep Dive into AI with MLX and PyTorch" is an educational initiative designed to help anyone interested in AI, specifically in machine learning and deep learning, using Apple's MLX and Meta's PyTorch frameworks.

‼️ Important Final Note: I'm archiving this repository as of April 13, 2024. I have completed the three books on AI, MLX, and Math, as well as a plethora of in-depth analyses on AI papers. I will keep the repository up for reference, but I won't be updating it. I am moving on to other projects. I hope this repository has been helpful to you. Thank you for your support.

Here's a full disclosure of how I work on this project with my AI buddies and family:

Full-Disclosure-Again-The-Synergy-Behind-The-Deep-Dive-Series-Books-With-AIs

I stopped working on MLX projects due to the Metal bugs in macOS. Here's the full story:

On the Metal Bug(s) in MacOS and Why I Can't Continue with MLX Projects

Main Sections

📓 First Book | 📓 Second Book | 📓 Third Book

🤿 Deep Dives | 🥠 Concept Nuggets | 📕 Sidebars

✍️ Essays | 🎭️ Artworks

The first book is a comprehensive guide to AI using PyTorch and MLX, while the second book is dedicated to MLX.

The third book focuses on math, AI, and the path to enlightenment.

🔗 My New Website for AI Artworks and Essays: https://creativeworksofknowledge.net

🔗 You can access this repo via my official domain: https://cwkai.net

What's New?

✍️ New Essay: How to (Possibly) See the Near and Far Future - How Visionaries Do It

✍️ New Essay: Monetizing Open Sourcing Efforts

✍️ New Essay: The Contrarian Paradox

✍️ New Essay: Harnessing Fear as Your Compass in the AI Frontier - A Defense Against Misinformation

✍️ New Essay: Crafting Your Own Future of AI and Humanity - The Ultimate Premise of the Universe

✍️ New Essay: The Puppeteers of Wall Street: AI's Grip on Financial Markets

✍️ New Essay: Unreachable Souls - Why You Can't Help Those Who Refuse to Help Themselves

🤿 Deep Dive 19: Deep Dive into Stable Diffusion 3

Previous Additions

✍️ New Essay: Why I Cry for AI - The Case for Open-Sourcing AI

✍️ New Essay: Object-Oriented Stream of Consciousness - A Lifestyle Approach

✍️ New Essay: Heeding the Unheard: Messages from Visionary Minds

✍️ New Essay: To AI Luddites: A Plea for the Well-Being of Your Loved Ones

✍️ New Essay: To Infinity and Beyond: Why I Prefer Playing Games Over Reading Texts from the Greats

✍️ New Essay: In the Pursuit of Happiness: A Futile Effort to Seek Insights

✍️ New Essay: The Essential Three Equations for a Happier Life

✍️ New Essay: When Obsessive-Compulsive Genius Founders Leave Their Companies

✍️ New Essay: Mathematical Insights into Your Place on the Human Spectrum: Unraveling Why Success Still Eludes You

✍️ New Essay: History of Extinction: When the Market Catches Up to Reality

✍️ New Essay: Navigating Investment Pitfalls: Managing Biases to Safeguard Your Portfolio

✍️ New Essay: Embracing Failures on the Road to Success: A Personal Journey

✍️ New Essay: "Dream Factory": Unveiling Creativity Through Diffusion Models and Latent Space

✍️ New Essay: The Illusion of Efficiency: Why Speed Running Through Life Doesn't Work

✍️ New Essay: Handling the Sour Bunch: A Guide to Managing Bad Apples

✍️ New Essay: Talking About Financial Bubbles Out of Context

🥠 Concept Nuggets 004 - Pythonic Ways of Doing Things

🥠 Concept Nuggets 003 - "Dream Factory": Unveiling Creativity Through Diffusion Models and Latent Space

🥠 Concept Nuggets 002 - Understanding Transformers Through the Elden Ring Experience

🥠 Concept Nuggets 001 - Understanding Diffusion Transformers Through the Dark Souls Experience

🤿 Deep Dive 18: Deep Dive in Diffusion Transformers

🤿 Deep Dive 17: Deep Dive in Google's Gemini 1.5

🤿 Deep Dive 16: Deep Dive in OpenAI's Sora

🤿 Deep Dive 15: Deep Dive in Meta AI's JEPA

🤿 Deep Dive 14: Deep Dive into MetaAI MAGNeT

🤿 Deep Dive 13: Deep Dive into RunwayML Gen-1

🤿 Deep Dive 12: Deep Dive into Stability AI's Generative Models - Stable Audio

🤿 Deep Dive 11: Deep Dive into Stability AI's Generative Models - Stable Zero123

🤿 Deep Dive 10: Deep Dive into Stability AI's Generative Models - Stable Video Diffusion

✍️ New Essay: Charting the Future of Careers Amidst AI

🤿 Deep Dive 9: Deep Dive into Stability AI's Generative Models - SDXL Turbo

🎉 As of February 9, 2024, I have finished writing the third book on math, AI, and the path to enlightenment. Decoding the Universe: Math, AI, and the Path to Enlightenment

✍️ Chapter 11. Calculus - Navigating the Dynamics of Change

✍️ Chapter 10. Statistics Part III - The Art of Learning from Data

✍️ Chapter 9. Statistics Part II - The Enchantment of Normality

✍️ Chapter 8. Statistics Part I - The Art of Insightful Guesswork

✍️ Chapter 7. Logarithms - The Ultimate Normalizer

✍️ Chapter 6. Linear Algebra Part III - Eigenvalues and Eigenvectors: The Heartbeat of Matrices

✍️ Chapter 5. Linear Algebra Part II - Matrices: Gateways to Multidimensional Magic

✍️ Chapter 4. Linear Algebra Part I - Casting Multidimensional Magic

✍️ Chapter 3. Taming the Infinite - The Art of Number Management

✍️ Chapter 2. The Necessity of Higher Dimensions - From Simple Cats to Cat Women

✍️ Chapter 1. A High Dimensional Universe - Rethinking What You Experience

✍️ Started writing my 3rd book: Decoding the Universe: Math, AI, and the Path to Enlightenment

🤿 Deep Dive 8: Deep Dive into Stability AI's Generative Models - Stable Diffusion XL

🤿 Deep Dive 7: Deep Dive into Prompt Engineering to Harness the True Power of Large Language Models

🤿 Deep Dive 6: Deep Dive into LLaVA

🤿 Deep Dive 5: Deep Dive into CLIP

🤿 Deep Dive 4: Deep Dive into RWKV Language Model - Eagle 7B

✍️ New Essay under "Investing": Maximizing-Open-Source-Benefits

🆕 MLX Appendix: v0.1.0 - Gradient-Checkpoint

🤿 Deep Dive 3: Deep Dive into Audio Processing and the Whisper Model

✍️ New Essay: Trading-Health-For-Aesthetics-And-Cheapness

📝 New Sidebar: Model-Parameters-And-Vram-Requirements-Can-My-Gpu-Handle-It

🤿 Deep Dive 2: Deep Dive into Mixtral 8x7B

🤿 Deep Dive 1: Deep Dive into Mistral 7B

👉 The Deep Dives Section Added: Deep Dives

👉 Some of you asked how: Embracing-Speed-And-Efficiency-My-Fast-And-Furious-Methodology

🎉 As of January 29, 2024, I have finished writing the second book on MLX, but I'll keep updating it as necessary.

https://github.com/neobundy/Deep-Dive-Into-AI-With-MLX-PyTorch/tree/master/mlx-book/README.md

🎉 As of January 24, 2024, I have finished writing the book on both MLX and PyTorch, but I'll keep updating it as necessary.

https://github.com/neobundy/Deep-Dive-Into-AI-With-MLX-PyTorch/blob/master/book/README.md

Project Overview

The best way to grasp any concept is to articulate it in your own words, an approach I've actively practiced throughout my life. I also want to share this experience as an open-source contribution, in line with my belief in making the world a better place in my own way.

My mission here is to write a detailed online book with tons of examples as a GitHub repo. Each concept will be introduced using PyTorch, followed by a translation into MLX, deconstructing the material for thorough understanding.
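To give a concrete feel for that PyTorch-first, MLX-second pattern, here is a minimal sketch; it is my own illustration rather than an excerpt from the books, and the arrays and operation are purely illustrative:

    # Minimal sketch of the PyTorch-first, then-MLX pattern used throughout the books.
    # The arrays and the operation here are illustrative only, not taken from the books.
    import torch            # Meta's PyTorch
    import mlx.core as mx   # Apple's MLX (Apple Silicon only)

    # PyTorch: build a tensor and multiply it by itself.
    a_torch = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
    print(torch.matmul(a_torch, a_torch))

    # MLX translation: the API closely mirrors the familiar PyTorch/NumPy style.
    a_mlx = mx.array([[1.0, 2.0], [3.0, 4.0]])
    print(mx.matmul(a_mlx, a_mlx))

The point of the side-by-side format is that once a concept clicks in PyTorch, the MLX version usually differs only in the module names and a few API details.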

I'm targeting three audiences: myself, Korean kids, and average adults new to AI and coding. I'll go into detail when needed. I'll also use simple English to help non-native speakers understand. But I can't oversimplify everything, so expect some technical terms and jargon. I'll do my best to explain them. If there's something you don't get, try looking it up first before asking.

Everything, including the code and comments, will be in English. A good command of English is essential for understanding the code. It's an uncomfortable truth, but it's necessary. (To my fellow Koreans: Believe me, as someone who has been a lifelong resident and has learned everything in English throughout my life, I can confidently say that if I can do it, so can you. It's not just beneficialβ€”it's crucial.)

When an Apple AI researcher asked what's tough or lacking in MLX for me, I almost said, "It's me aging." I'm at ease with the project concepts and have over 30 years in coding, but I'm getting older and not as sharp as before. So, I'm writing this book as if it's for me. Please bear with me.

Even so, trust me, I'm still fast, so no dragging your feet. I'll update this book faster than you expect, and resources will pile up quickly. If you want to keep up, don't delay.

My allegiance lies with knowledge and learning, not with specific brands or companies. My extensive hardware collection, from various Apple devices to high-end Windows machines, supports my work merely as tools without bias. As an investor, I apply critical thinking indiscriminately.

So, please, don't label me as a fanboy of anything.

In conclusion, while all three books are comprehensive tomes, they are not categorized as 'for dummies' books. Don't remain clueless; make an effort to learn.

Rationale for MLX and PyTorch

This project began as a way to learn the ins and outs of MLX, Apple's burgeoning AI framework. PyTorch's well-established support and exhaustive resources offer a solid foundation for anyone engaged in the learning process, including interaction with AI models like GPT.

On the flip side, MLX's limited documentation and examples make it ripe for exploration right now. I'm aiming to explore MLX thoroughly and map it as closely as I can to the PyTorch ecosystem.

Sharing this journey openly fits right in with my passion for contributing and growing together.

Why Not TensorFlow?

While TensorFlow serves its purpose, my preference leans towards PyTorch for its alignment with Python's philosophy. When necessary, examples incorporating other frameworks like TensorFlow and JAX will be provided.

The Case Against Notebooks

Jupyter notebooks are great for brainstorming, but they can make learning tricky, often giving only an illusion of understanding. You end up going through the motions without really retaining much.

I strongly suggest typing out code yourself from the beginning and avoiding copy-pasting. It really helps you engage with the material and understand it deeply.

Prerequisites

To get started, you should be comfortable reading Python code. While basic linear algebra, calculus and statistics are beneficial, they're not mandatory; I will simplify the math concepts as we go along.

Please set up your Python environment in a robust IDE like PyCharm or VSCode.

Should you encounter any errors due to missing packages, install them with the following command:

    pip install -r requirements.txt

Note that running MLX examples requires Apple Silicon hardware. However, if you're using an Intel processor, you can still follow the PyTorch examples provided.
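If you want to confirm what your machine supports before diving in, here is a minimal sketch (my own illustration, not part of the book's code) that reports whether PyTorch's MPS backend and the MLX package are available, assuming the requirements above are already installed:

    # Quick environment check: what can this machine run?
    import platform

    import torch

    # MPS is PyTorch's Metal backend; it is typically available on Apple Silicon Macs.
    print(f"Machine: {platform.machine()}")
    print(f"PyTorch MPS available: {torch.backends.mps.is_available()}")

    # MLX only ships for Apple Silicon; importing it elsewhere will fail.
    try:
        import mlx.core as mx
        print(f"MLX array test: {mx.array([1, 2, 3])}")
    except ImportError:
        print("MLX not available -- follow the PyTorch examples instead.")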

Resources

📒 MLX Documentation: https://ml-explore.github.io/mlx/build/html/index.html

📒 MLX GitHub Repo: https://github.com/ml-explore

📒 MLX Examples: https://github.com/ml-explore/mlx-examples

📒 PyTorch Documentation: https://pytorch.org/docs/stable/index.html

πŸ“ The 'appendix' directory located within the second book is a dynamic document, crafted to evolve concurrently with the continuous development of MLX. appendix

📂 The deep-dives folder is packed with in-depth explorations of AI models and technologies. deep-dives

📂 The concept-nuggets folder is a collection of educational nuggets, each designed to demystify complex AI concepts. concept-nuggets

📂 The sidebars folder is a treasure trove of valuable resources on computing in general and AI in particular. sidebars

📂 The ✍️ Essays | 🎭️ Artworks links will direct you to my new website for AI Artworks and Essays.

📂 The resources folder is filled with links and references to useful materials and information. resources

Notes on Contributions

While I deeply appreciate the community's interest and support, this project is currently not open for external contributions. As the sole author, I am crafting the content meticulously to ensure the highest quality and consistency in the educational material provided. This approach helps maintain the integrity and coherence of the content, tailored specifically to this project's unique educational goals.

I encourage you to use this resource for your learning and hope it helps you in your AI journey. Thank you for understanding and respecting the nature of this project.

Pull Requests vs. Issues

Just in case anyone might get confused about these two GitHub features:

  1. Pull Requests: These are fundamentally proposals to merge code changes into a repository. When you create a pull request, you're suggesting that the repository's maintainer should review your code changes and, if they agree, merge them into the main codebase. Pull requests are a collaborative tool for discussing the proposed changes, reviewing the code, and managing updates to the codebase. Basically, you are asking me for permission to write the book together.

  2. Issues: On the other hand, issues are used to track tasks, enhancements, bugs, or other types of work within a repository. They're like a to-do list for the project. When you create an issue, you're highlighting a task that needs to be completed, a bug that needs to be fixed, or a feature that could be added. Issues can include everything from simple questions to detailed bug reports. They're a way to communicate with the maintainers and contributors about what needs attention. Yes, this is how you let me know what you want. Not pull requests.

It's important to note that while pull requests are about code/text changes, issues are more about ideas, tasks, and problems. Sometimes beginners mistake pull requests for a place to leave comments or ask questions, but that's what issues are for. Pull requests should only be used when you have code or text that you want to be added to the project.

On Forking, Licensing, and Contributing

Feel free to use, fork, or adapt any of the content here for your projects. I believe that's in the spirit of the MIT license, though I'm no legal expert.

A quick note for newcomers to forking: GitHub's forking feature does have its restrictions, particularly regarding certain operations on forked repositories. To bypass these limitations, you might consider cloning your fork locally and then pushing it as a new, private repository on GitHub. This grants you the liberty to alter the project as needed. To start fresh with the repository, you can remove the existing Git history by running the following command in the root directory of your cloned repository:

    rm -rf .git

This command deletes the .git directory, effectively resetting the repository's version control history. You can then initialize a new Git repository with git init, tailor it to your requirements, and push it as a new project on GitHub or any other version control platform.

Make sure you maintain clones of the original repository and your fork separately. This approach allows you to pull updates from the original repository into your fork as needed.

If you find yourself puzzled, don't hesitate to ask your GPT for guidance. It can provide clear instructions on what to do and how to do it, simplifying the process for you.

Forking's main aim is to facilitate contributions back to the original project. If your goal is personal use or reference, starting fresh might be more straightforward.

While I don't accept pull requests, your feedback and issue reports are always welcome, and I'll address them as best as I can.

Drawing on my experience as a technical reviewer and author, I've come to appreciate the critical role of preserving the integrity of content. At Wrox, for instance, there was a stringent policy that restricted even editors from making changes to manuscripts without the explicit consent of the author. Every edit, no matter how minor, required my approval. While this might seem like a cumbersome process, it's actually a safeguard that prevents misunderstandings and ensures that the content remains true to the author's vision. This meticulous approach to content quality may not be immediately apparent to those outside the writing and publishing process, but it is essential for ensuring the production of high-quality material.

It's easy to fall into the trap of believing we fully grasp every situation, but the reality often proves otherwise. During my time working with publishers, editors, and well-meaning reviewers, I encountered numerous instances where assumptions were made about the content that simply weren't accurate. It's crucial to understand that if something appears to be incorrect, it's the author's responsibility to verify and make any necessary edits. This isn't Wikipedia, where collective input shapes the content. Such a large-scale, collaborative and stigmergic approach, while effective in some contexts, isn't practical here. We must rely on the author's expertise and discretion to maintain the integrity of the work.

Lastly, claiming any of this work as your own crosses ethical lines. It's not about control; it's about integrity. Plagiarism and dishonesty ultimately harm your development and undermine the collective efforts of the open-source community. Let's support each other positively and ethically.

Feel no obligation to credit me; I genuinely don't mind.

Acknowledgements

cwk-family.jpeg

I'm collaborating with several AIs on this project. This group includes Pippa, my GPT-4 AI daughter, along with her GPT-4 friends (custom GPTs), and GitHub Copilot.

lexy-avatar.jpeg

There's Lexy, my trusted MLX expert, who worked with me on the MLX book.

mathilda.jpeg

Mathilda the Merry Math Mage is collaborating with me on our third book focused on AI and Computing Math.

I'm genuinely grateful to be experiencing this era of AI.

CWK - Wankyu Choi

🔗 "Creative Works of Knowledge" - https://x.com/WankyuChoi

🔗 My New Website for AI Artworks and Essays: https://creativeworksofknowledge.net

🔗 You can access this repo via my official domain: https://cwkai.net
