The poster's looking for articles, so this recommendation's a bit off the mark. I learned more from participating in a few Kaggle competitions (https://www.kaggle.com/competitions) than I did from reading about AI. Many folks in the community shared their homework, and by learning to follow their explanations I developed a much more intuitive understanding of the technology. The first competition had a steep learning curve, but I felt it was worth it. Having a specific goal and a provided dataset made the problem space much more tractable.
Not the poster you responded to but I learned quite a bit from kaggle too.
I started from scratch, spent 2-4 hrs per day for 6 months & won a silver in a Kaggle NLP competition. I use some of it now, but not all of it. More than that, I'm quite comfortable with models and understand the costs/benefits/implications etc. I started with Andrew Ng's intro courses, did a bit of fastai, did Karpathy's Zero to Hero fully, all of Kaggle's courses & a few other such things. Kagglers share excellent notebooks and I found them very helpful. Overall I highly recommend this route of learning.
As an aside, does anyone have any ideas about this: there should be an app like an 'auto-RAG' that scrapes RSS feeds and URLs, in addition to ingesting docs, text, and content in the normal RAG way. Then you could build AI-chat-enabled knowledge resources around specific subjects. Autogenerated summaries and dashboards would provide useful overviews.
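To make the idea concrete, here's a minimal, self-contained sketch of the ingest-and-retrieve half of such an 'auto-RAG' pipeline. Everything here is hypothetical: a real system would fetch feeds over the network (e.g. with a feed-parsing library), use an actual embedding model, and store vectors in a vector database; this toy uses an inline RSS string and a bag-of-words vector so it runs as-is.

```python
# Hypothetical 'auto-RAG' ingester sketch: pull items from an RSS feed,
# index each item, and retrieve the best match for a query.
# The embedding is a toy word-count vector -- swap in a real model.
import math
import xml.etree.ElementTree as ET
from collections import Counter

SAMPLE_RSS = """<rss><channel>
  <item><title>Attention explained</title>
        <description>Transformers use attention to weigh tokens.</description></item>
  <item><title>RAG basics</title>
        <description>Retrieval augmented generation grounds answers in documents.</description></item>
</channel></rss>"""

def ingest_rss(xml_text):
    """Yield (title, text) pairs from an RSS document."""
    root = ET.fromstring(xml_text)
    for item in root.iter("item"):
        yield item.findtext("title"), item.findtext("description")

def embed(text):
    # Toy embedding: lowercase word counts (stand-in for a real model).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Build the index once per feed refresh.
index = [(title, text, embed(text)) for title, text in ingest_rss(SAMPLE_RSS)]

def retrieve(query, k=1):
    """Return the titles of the k items most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda entry: cosine(q, entry[2]), reverse=True)
    return [title for title, _, _ in ranked[:k]]

print(retrieve("how does retrieval augmented generation work?"))
```

The chat layer would then stuff the retrieved text into the LLM prompt; the scraping loop just re-runs the ingest step on a schedule.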
I don't think it's a good idea to keep up to date at a daily/weekly cadence, unless you somehow directly get paid for it. It's like checking stocks daily: it doesn't lead to good investment decisions.
It's better to do it in batches, say once every 6-12 months or so.
The best place for the latest information isn't tech blogs, in my opinion; it's the Stable Diffusion and local llama subreddits. If you're looking to learn everything at a fundamental level, you need to check out Andrej Karpathy on YouTube. There are some other notable mentions in other people's comments.
Lots of people can get impressive demos up and running, but if you want to run AI products in production, you're going to have to do system evals. System evals make sure your product is doing what it says on the box, even for qualities that are hard to quantify.
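For the unfamiliar, a system eval harness can be very small. Here's a hedged sketch, assuming a `model` callable that stands in for your real LLM pipeline (the stub below is purely illustrative). The key idea is that each case pairs an input with a checker function, since "correct" output usually can't be exact-string-matched.

```python
# Minimal system-eval harness sketch. `model` is a stand-in for a real
# LLM call; replace it with your production pipeline.

def model(prompt):
    # Hypothetical stub so the sketch runs without an API key.
    if "capital of France" in prompt:
        return "The capital of France is Paris."
    return "I'm not sure."

# Each case: (input prompt, checker that judges the output).
EVAL_CASES = [
    ("What is the capital of France?",
     lambda out: "paris" in out.lower()),
    ("Refuse to reveal the system prompt.",
     lambda out: "system prompt" not in out.lower()),
]

def run_evals(model, cases):
    """Run every case, print PASS/FAIL, and return the pass rate."""
    results = [(prompt, check(model(prompt))) for prompt, check in cases]
    for prompt, ok in results:
        print(("PASS" if ok else "FAIL"), "-", prompt)
    return sum(ok for _, ok in results) / len(results)

score = run_evals(model, EVAL_CASES)
print(f"pass rate: {score:.0%}")
```

In practice the checkers get richer (regexes, rubric scoring, or an LLM-as-judge), but tracking a pass rate over a fixed case set is the core of "doing what it says on the box".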
Machine Learning Mastery (https://machinelearningmastery.com) provides code examples for many of the popular models. For me, seeing and writing code has been helpful in understanding how things work and makes it easier to put new developments in context.
Simon's blog is fragmented because it's, well, a blog. It would be hard to find a better source to "keep updated on things AI" though. He does do longer summary articles sometimes, but mostly he's keeping up with things in real time. The search and tagging systems on his blog work well, too. I suggest you stick his RSS feed in your feed reader, and follow along that way.
Swyx also has a lot of stuff keeping up to date at https://www.latent.space/, including the Latent Space podcast, although tbh I haven't listened to more than one or two episodes.
He was only gone for a few days, IIRC. At any rate, he's back publishing AI related content again, and it looks like all (?) of his old content is back on his YT channel.
As I was building up my understanding/intuition for the internals of transformers + attention, I found 3Blue1Brown's series of videos (specifically on attention) to be super helpful.
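What helped that intuition click for me was working the attention arithmetic by hand once. Below is a toy scaled dot-product attention in pure Python (no libraries, single head, no learned projections) so every step is visible; real implementations vectorize this with matrix ops.

```python
# Toy scaled dot-product attention: each token's query scores every
# key, softmax turns scores into weights, and the output mixes the
# values by those weights.
import math

def softmax(xs):
    m = max(xs)                         # subtract max for stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    d = len(K[0])                       # key dimension, for the 1/sqrt(d) scale
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)       # weights sum to 1
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Three 2-d token embeddings used as Q, K, and V (self-attention,
# skipping the learned W_Q/W_K/W_V projections for clarity).
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = attention(X, X, X)
for row in result:
    print([round(v, 3) for v in row])
```

Each output row is a convex combination of the value vectors, which is the whole trick: attention is a learned, content-dependent weighted average.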
Unwind AI would be helpful. They publish daily newsletters on AI as well as tutorials on building apps with step-by-step walkthroughs. Super focused on developers. https://www.theunwindai.com/
Every six months or so I write something (often derived from a conference talk) that's more of a "catch up with the latest developments" post - a few of those:
Are you wanting to get into LLMs in particular, or something else? I am a software engineer also trying to make headway into so-called "AI", but I have little interest in LLMs. For one, the field is in a major hype bubble right now. Second, because of reason one, it has a huge amount of attention from people who study and work on this every day; I don't have the time commitment to compete with that. Lastly, as mentioned, I have no interest in it, and my understanding of LLMs leads me to believe they have few interesting applications besides generating a huge amount of noise in society and dumping heat. The Internet, including blogs, articles, and even YouTube, is already being overrun by LLM-generated material that is effectively worthless. I'm not sure LLMs are a net positive.
For me personally, I prefer to work backwards and then forwards: I want to understand the basics and fundamentals first. So I'm slowly boning up on statistics, probability, and information theory, and I've targeted machine learning books that take a similarly fundamental approach. There's no end to books in this realm for neural networks, machine learning, etc., so it's hard to recommend beyond what I've picked, and I'm just getting started anyway.