Ask HN: Best practices using AI as an experienced web dev
9 points by thebordella 5 days ago | 8 comments





I use LLMs to write "secondary" code. Things like deployment scripts, autoformatters, GitHub Actions workflows, Dockerfiles, or Kubernetes config. Stuff that isn't strictly part of the main web project, but is still valuable to have. I use LLMs there because

a) those tend to be boilerplate, and LLMs are great for boilerplate, and

b) code quality doesn't really matter too much, and

c) those tend to be written in languages that you may not be well-versed in, since they usually aren't in the "primary" language of the project.


I've found that I rely most heavily on LLMs when:

1. I'm developing a utility package that's easily testable and I'm certain of the interface. I'll write the interface for the package in my editor, then ask an LLM to generate unit tests. Then, I'll sketch out the function calls/structure of the package and get an LLM to fill out the rest (a rough sketch of this workflow is at the end of this comment).

2. I'm bug bashing and want to quickly check if the bug is obvious. I'll feed a description of the behavior into GPT/Claude along with the relevant code (generally as little code as possible to prevent listicle-type responses that are irrelevant).

3. I'm adding code that follows an established pattern within the codebase -- for example, adding a new API handler that involves generating an OpenAPI path + component snippet, an HTTP handler, and a few database methods. This is when copilots are particularly useful.

4. I'd like a sanity check for issues in a complex bit of code I've written.

I find these mirror the tasks you'd typically hand off to a less experienced dev on the team -- things you can validate with knowledge you already have, where validating the output is faster than doing the work yourself.
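
To make the first item concrete, here's roughly the scaffold I'd hand over -- the slugify example and Jest-style tests are made up purely for illustration:

    // slugify.ts -- the interface and stub I write by hand first
    export interface SlugifyOptions {
      separator?: string;   // defaults to "-"
      lowercase?: boolean;  // defaults to true
    }

    export function slugify(input: string, options?: SlugifyOptions): string {
      // the LLM fills this in against the tests below, then I review it
      throw new Error("not implemented");
    }

    // slugify.test.ts -- tests the LLM drafts from the interface (Jest-style runner assumed)
    import { slugify } from "./slugify";

    test("replaces whitespace with the separator", () => {
      expect(slugify("Hello World")).toBe("hello-world");
    });

    test("respects a custom separator and casing", () => {
      expect(slugify("Hello World", { separator: "_", lowercase: false })).toBe("Hello_World");
    });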


In a new repo, "find me the code that..." is an excellent use case.

Plumbing, CRUD, looking up & hooking up APIs, docs, etc. Generally speaking, things that are low complexity & low uncertainty & code that you don't want to write yourself.

For experienced devs who already have a solid understanding of the codebase they're working on, the potential upside of using AI is rather small. But when jumping into a new codebase, AI (with codebase context) can be used as a semantic search tool, or simply to speed up the process of understanding the codebase. And when it suggests code, it can surface patterns/conventions that were never documented. You need to be careful, though, because it can repeat bad code too.

Disclosure: I'm building EasyCode, a context-aware coding assistant.


To effectively use AI as an experienced web developer, consider the following best practices:

Integration of AI Tools: Incorporate AI tools for tasks like code generation, debugging, and optimization. These can save significant time and enhance productivity.

Testing and Validation: Rigorously test AI models and tools to ensure they perform as expected. This includes checking for biases and ensuring that outputs are reliable and safe for production.

Stay Updated: Continuously learn about new AI technologies and frameworks that can improve your development process. The field is rapidly evolving, and staying informed will help you leverage the latest advancements.

Ethical Considerations: Build responsible applications by considering the ethical implications of AI. Use pre-existing safety mechanisms and be transparent about AI usage in your applications.

Collaboration: Work with other developers and data scientists to share knowledge and best practices, which can lead to more innovative uses of AI in web development.

Always remember to double-check important information and stay informed about the latest trends and technologies in AI and web development.


I started using AI for learning unknown APIs quickly, without having to scour the Internet for examples, and for improving documentation. You can send it a few paragraphs and have AI rewrite them for you.

I guess I'm an "ancient" then, having done web dev since the first Netscape Navigator. I use ChatGPT all day, every day for mundane tasks:

1. As a Google/Stack replacement, asking complex queries in natural language with many follow-ups. It's really good at helping me understand complex topics step by step, at an appropriate level of detail.

2. To help me with the syntax I can never remember (like different combinations of TypeScript types and generics and explaining what it all means). I feed it three or four types and tell it "I need a fifth that inherits X from here, Y from there, and adds Z, which can be blah blah blah..." and it's really good at doing that and then also teaching me the syntax as it goes (see the sketch right after this list).

3. To write in-line JSDoc/TSDoc to make my functions clearer (to other devs). At work we have a largely uncommented codebase, and I try to add a bunch of context on anything I end up working on or refactoring.

4. To farm out some specific function, usually some sort of nested reducer that I hate writing manually, or a cascade of Object.entries() calls with many layers (there's a sketch of the kind of thing I mean a bit further down).

5. Ask it higher-level architectural questions about different frameworks or patterns, and treat it as a semi-informed second opinion that I always double-check.
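
For the type gymnastics in item 2, a typical ask looks something like this (the types are invented purely for illustration):

    // existing types I paste into the chat
    interface User { id: string; name: string; email: string }
    interface Audit { createdAt: Date; updatedAt: Date }
    interface Permissions { roles: string[] }

    // "I need a fifth type that takes id and name from User, everything from
    //  Audit and Permissions, and adds an optional avatarUrl"
    type AdminProfile = Pick<User, "id" | "name"> &
      Audit &
      Permissions & { avatarUrl?: string };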

Generally speaking, it's really pretty good at most of this. I manually read through and verify everything it produces line-by-line and ask it for corrections when I spot problems. It's still a lot faster than, say, trying to code review a true junior dev's work. It's not quite as efficient as being able to easily talk shop with another experienced dev, but it's rare for me (in my jobs) to have a lot of experienced devs working on the same feature/PR at once anyway, so compared to someone jumping into a branch fresh, ChatGPT is a lot better at picking up the context.
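
And as a concrete example of the item-4 stuff I farm out, this is the flavor of nested transform I mean (the data shape here is made up for illustration):

    // invented shape: collapse per-page, per-device metrics into totals per device
    const pageMetrics: Record<string, Record<string, number>> = {
      "/home":    { desktop: 120, mobile: 80 },
      "/pricing": { desktop: 45,  mobile: 60 },
    };

    const totalsByDevice = Object.entries(pageMetrics).reduce<Record<string, number>>(
      (totals, [, byDevice]) => {
        for (const [device, count] of Object.entries(byDevice)) {
          totals[device] = (totals[device] ?? 0) + count;
        }
        return totals;
      },
      {}
    );
    // => { desktop: 165, mobile: 140 }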

---------

I do NOT:

A) Use an in-IDE AI assistant. Copilot was hit or miss when it came out. It was great at simple things, but introduced subtle flaws in bigger things that I wouldn't always catch until later. It ended up wasting more time than it saved. The JetBrains AI assistant was even worse. Maybe Claude or Cursor etc. are better, I dunno, but I don't really need them. I love WebStorm as it is, without AI, and I can easily alt-tab to ChatGPT to get the answers I need only when I need them.

B) Use it to write public-facing documentation. While it can be good at this, public-facing stuff demands a level of accuracy that it can't quite deliver yet. Besides, I really enjoy crafting English and don't want a robot to replace that yet :)

Overall, it's a huge time saver for sure. I expect it to fully replace me someday soon, but for now, we're friends and coworkers :)


I don't use AI to generate code that I already know how to write, or code that I don't know how to write. I use it for that weird gray area of things I SHOULD know, but can't immediately conjure off the top of my head nor find good documentation on. Generally, my process is:

0 - Try to write the code myself, using LSP hints as needed

1 - Read the primary source (man page, documentation, textbook) to find an answer. Upside is that I learn something about related topics along the way by skimming the table of contents

2 - Consult Stack Overflow/Google. This has become less and less useful, as both of those resources have become flooded with garbage info and misleading blog posts in the last several years.

3 - Pull out the AI copilot and ask it for help, while sharing what I already know and what I think the shape of the solution will be.

4 - Actively seek help - talk to colleagues, post a question on a relevant forum, etc...

Is this perfect? No -- in the worst case, I've wasted hours on an answer that the copilot thought was correct, but was not. But on balance, I'd say it has saved me many days' worth of time in the last year, usually in the form of research and knowledge discovery. It's much faster to test out a bunch of potential solutions when the copilot is writing most of the code and I just tweak a few relevant parameters.

I've been using all the time I've saved on mundane programming to study computer science from first principles. As someone without a CS degree, I am acutely aware of my gaps in knowledge about theoretical computer science. I consider myself a halfway-decent programmer at this point, so I don't find filling my head with more and more syntax and esoteric rules about frameworks to be helpful. I'd rather learn what the basis is for all those rules, and reconstruct them myself as needed.

I also have a lot more confidence to branch out from my moneytree (web dev) and try my hand at other areas of programming, like embedded development, messaging, and language theory. This field is endlessly fascinating, and I selfishly want to learn and try it all. So far in my career, I've spent most of it in web dev, but have also been able to test the waters of embedded development for a year, and interesting "back-end" services for another year or so. I even had the confidence to start my own company with a friend, realizing that I could actually shoulder most of the development burden early on if I strategically rely on AI to prompt me through implementation details which I'm not quite an expert in.

This is my strategy for ensuring longevity in this career. I'll admit I'm only on year 8 of programming professionally, but I hope this is the correct attitude to have.



