As a front-end developer constantly seeking to enhance productivity, I've spent the last several months integrating AI tools into my daily workflow. Cursor.ai has become my editor of choice, while Claude AI serves as my coding companion both through Cursor's interface and as a standalone web application. To complete my AI toolkit, I've incorporated Midjourney for visual design tasks including wireframing and UI theming.
This article is the first in a series where I'll candidly share both the challenges and benefits I've encountered. Before exploring the potential advantages in subsequent articles, I want to first address the very real limitations and pitfalls of these AI tools. Understanding these constraints is essential before incorporating AI assistants into your professional development process.
Disclaimer
Using AI tools can potentially expose sensitive information about your organization: not just code, but also structural and personal details. Your API keys and other confidential information could theoretically be stored or used for training, along with potentially sensitive data about users, team members, and other exploitable information.
When using any AI tool, always operate under the assumption that the information you share is neither sensitive nor confidential.
Cursor.ai
Cursor provides settings that let you control what information is sent externally for AI assistance. Take time to review these settings and mark the filters and files that should never be sent to the server. Additionally, check the code-retention settings in your Cursor account to manage how long your code queries are stored.
Claude.ai
While using Claude.ai through a browser gives you more control over the information you share (since you manually provide the code you want analyzed or modified), remember that your code is still retained on Claude's servers for up to 30 days. Therefore, it's crucial to avoid sharing sensitive structures or information.
Clarification
Throughout this article, when I refer to Cursor.ai, I'm specifically discussing the Claude.ai agent or chat functionality within Cursor, not GPT. When I mention Claude.ai, I'm referring to the browser version of the service.
Over-using AI tools
As we will explore in these articles, there are compelling reasons not to overly rely on AI tools for code creation. AI should be used strategically—as a tool to automate mundane tasks, gather insights, and serve as a sparring partner. Excessive dependence on AI to write and fix your code will ultimately create problems for both you and your organization.
Challenges of Using AI
Despite the considerable benefits and hype surrounding AI tools, I want to first address some challenges I've encountered. These experiences are genuine and important to understand before exploring how AI has positively impacted my work. In the follow-up article, I'll cover the advantages of working with AI.

Over-reach
One of the most common issues I've encountered is Claude going beyond my original request. For example, I might ask it to "swap out the images used in these mock files with this list of images (img1.png, img2.png)" and provide mock1.json and mock2.json as context. About 40% of the time, it will not only update the images but also start modifying unrelated content within the JSON files.
This problem becomes even more significant during large multi-file refactors, where the AI might begin removing existing functionality completely unrelated to your original request. I call this phenomenon "over-reach"—Claude identifies what it perceives as additional problems and exceeds the scope of what it was asked to do.
Hidden cost: Over-reach also consumes your request quota, particularly when using the "agent" for multi-file refactors. If the AI begins over-reaching, it will burn through additional, unnecessary requests. The moment you notice it doing more than intended, cancel the request and provide clearer instructions.

Conflicting Information in Your Codebase
Recently, I revisited a codebase I had built months ago to refactor the back-end APIs, condensing information and optimizing resources. My typical workflow involves writing and testing code first, then updating documentation and readme files afterward.
This approach created challenges when using Cursor. When I asked it to create a simplified API based on my recently modified APIs, it had access to the entire API structure, back-end utilities, mock data, and crucially, the outdated readme file at the root of the project.
Rewrite the consumer API to now return the updated schema
- consumer.api.js, _mocks/_data.json, readme.md
The result was a confusing mix of recent and old functionality. Cursor placed excessive weight on the outdated readme file rather than prioritizing the code I had recently changed.
This issue occasionally occurs with inline documentation as well—I've experienced instances where Claude rewrote code because the documentation above the function was outdated. More frequently, however, due to its tendency to over-reach, it would correct the documentation to match the functionality.
A better approach is to scope the request so that only the relevant context is provided: Rewrite the consumer API to now return the updated schema
- consumer.api.js[200-700], _mocks/_data.json, readme.md[244-450]
This scopes the query to specific line ranges, rather than the entire contents of each file.

Context Bloat
A significant issue that substantially impacts the quality of AI interactions is what I call "context bloat"—essentially providing too much information in a single request. There are three primary ways to interact with the agent or chat:
- Asking direct questions and having an ongoing dialogue
- Making requests with the context of specific files or lines
- Asking questions with the entire codebase as context (using Ctrl + Enter or Cmd + Enter)
When too much information is included in a request, "noise" easily enters the conversation, resulting in diminishing quality over time. Large files, numerous small files, or combinations of both quickly create bloat, which worsens when combined with extended Chat histories. As a result, critical parts of your request may be ignored, over-reach becomes more likely, and some files might be completely overlooked.
For example, suppose I provide 15 mock data files that all share the same issue (URLs using "http://" instead of "https://") and ask the AI to fix it rather than doing a simple search and replace. Cursor almost never catches all affected files in one pass; it typically takes multiple follow-up queries to finally address every instance of the problem.
When possible, the better approach is to instead ask Cursor for a script or command that addresses the issue in one pass: provide me a script that will go through my data/*.json files and correct the image paths to be absolute and https, likewise make sure the images are valid, if not, provide a fallback working image
This produces a solution that requires no further AI requests or additional passes, and prevents new errors or missed instances. The resulting script will typically be tied to your project's codebase and language; for me, that means a Node.js script.

Memory Limitations
Another challenge when working with AI is the Ask / Chat memory, which can be further compromised by context bloat. Even when working on single-file solutions, memory limitations can quickly become problematic.
Consider a scenario where you're trying to solve an issue with a canvas element, adjusting panning, zooming, and scaling of elements within the canvas. If the issue isn't resolved after a few attempts but shows improvement, you might continue iterating. However, as the conversation progresses, the AI may begin to ignore previous fixes and improvements, potentially leading to regression rather than progress.
This happens because the context of previous fixes no longer applies to new queries as the conversation extends. In complex problem-solving scenarios with longer Chat / Ask histories, you might notice significant backward steps in solutions after just 15-30 minutes of back-and-forth communication.
Solutions
There are no perfect solutions to these challenges. I've come to treat AI as I would a junior developer on my team—considering both its benefits and limitations. You can explain a problem to a junior developer and let them attempt a solution, but you know it will likely need refinement and further guidance afterward.
Overwhelming anyone—human or AI—with too much information leads to mixed results, just as providing unclear instructions yields poor outcomes.
Limit codebase context
Full codebase context currently offers limited value. Instead, focus on including only specific files and lines relevant to your task, rather than entire files and folders.
Consider indirect approaches
Sometimes it's more effective to have the AI help you build a script to perform a specific task rather than asking it to do the task directly. I've had Cursor generate scripts that process my codebase and make targeted changes, which produced better results than having the AI attempt the same task directly.
Rebuild rather than refactor
In many cases, asking the AI to build a component or functionality from scratch yields better results than requesting a refactor. The refactoring process often introduces more noise and bloat, leading to suboptimal outcomes. Starting fresh while referencing specific lines and data structures often produces cleaner, more effective results.
Maintain focus in conversations
Keep Ask / Chat or Agent interactions focused on specific tasks. If you're making iterative changes, copy and paste the initial goal of the chat into subsequent prompts to maintain consistent context.
Provide comprehensive context
Remember that your mental context differs from the AI's understanding. You likely possess knowledge of specific details about your code that provide important context for addressing problems. Don't assume the AI knows what you know—explain problems in detail initially to help it provide more accurate solutions.
Chat / Ask vs Agent
Remember to use Chat / Ask for questions, discussion, and insight, and use Agent for code changes.