
My Experience with Claude Code

I have always been skeptical of agentic workflows that let a probabilistic algorithm scan your files, but with some free credits on hand, it seemed worth checking out.


Published on Apr 5, 2026

Updated on May 7, 2026


Preface

Before going in on this: I’m not an anti-AI person. I believe that, with good usage, an AI can be extremely helpful in propelling your speed AS LONG AS YOU ARE GOOD YOURSELF. An AI will not make someone bad good, but rather amplify that jankiness a thousand times over.

Ever since AI came out, I have been following the trends, but I never dabbled in it or used it as “heavily” as some other users on the internet have. Whether in creative fields or development fields, AI has its uses and can be useful in a lot of applications, but have we been using it correctly?

What is an LLM, exactly?

I do think that, to utilize an LLM as best as possible, we need to understand exactly what it is. An LLM, which is just one application of the AI field as a whole, works on probabilities. There are two phases to an LLM:

  • Training Phase: This is the reason why RAM and GPU prices are crazy high right now, and the most demanding aspect of the arms race. Hundreds of machines compute and “nudge” the numbers inside an LLM model to make it more accurate, more useful, and overall “better”. I won’t delve into how those numbers are stored, how they are nudged, or the tensors and multi-dimensional spaces that are a marvel of maths and engineering.
  • Operation Phase: After the numbers have been nudged correctly, an input such as “Hello”, when put through, runs a bunch of calculations and heads towards a response that LOOKS LIKE what should be replied after a hello.

Heavy lifting word: LOOKS LIKE.

Once you know its nature, you will suddenly understand why all these “new” features of LLMs and agents are just a bunch of markdown files. It’s essentially just contextual data, condensed and frozen so that it can be cached and read by the LLM. Really not black magic if you think about it. Just the ingenuity of humanity.

What is Claude Code?

Claude Code is basically that chatbot you know (and love or hate), but integrated at a local level. It can read files, write files, and execute commands to see the results as its changes are applied. This removes the whole process of copying your file, pasting it into the chatbot, copying the code block from the webpage, pasting it into your editor, and rerunning the entire thing to see if it had any errors. That was THE workflow. With an embedded CLI, or an agent, it can relieve you of that process and just show you where it wants to add or remove code; your job is to verify that and “accept” or “deny”. Sounds amazing, right?

Did you catch the last step? Rerunning the entire thing to see if it had any errors. THAT is the step that makes me wary of these CLIs and agents. You hear all the time about agents somehow breaking people’s entire toolchain or workspace just to check or fix something. That screams security vulnerability to me.

I’m sure these companies have already thought of this, and they also warn their users not to blindly trust, and to actually verify, what the chatbots spew out. It’s a user error rather than the tool’s fault, really. Let’s see if it can live up to the hype as the revolutionary tool, set apart from the web app, that people claim it is.

The Journey

Chapter 1. The Installation Process

It seems like Gemini CLI gets a well-packaged build in Arch, while Claude Code is an AUR package or a bash script to install, which does give off the first impression that Gemini is better than Claude. This is not a big problem though, as Claude Code installed with no errors on the first try.

extra/gemini-cli 1:0.32.1-1
    Open-source AI agent that brings the power of Gemini directly into your terminal
aur/claude-code 2.1.86-1 (+42 8.30) (Out-of-date: 2026-03-30)
    An agentic coding tool that lives in your terminal

Chapter 2. Light Agentic Workflows

Instead of going full-on with all the models and skills set up, I would just see if it can work as an enhanced chatbot for this chapter. The testing would mostly revolve around my small game.

The tasks I would give it revolve around continuing to build the embedded game engine to the point of being a proof of concept. For this chapter, I want to let it implement simple rendering logic for colliders (AABB, OBB, Circle and Capsule).

Something that was in fact worthwhile to mention is the ability of these kinds of agents to self-diagnose common problems. Usually, after you copy the code from the chatbot, you run the program, hit an error, figure out where the problem is, and fix it, or have the chatbot fix it for you after pinpointing it. An agent can run the build itself and close that loop.

Chapter 3. Custom Status and Skills

Something I noticed missing on Claude was Gemini CLI’s status line or footer. But as the docs here show, we can customize one with jq as much as we want.

Status line
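As a sketch of what such a jq-based status line could look like (the statusLine settings key and the JSON fields piped in on stdin, such as model.display_name and workspace.current_dir, are my assumptions from the docs, so verify them there):

```shell
#!/bin/sh
# Hypothetical status line script, wired up in ~/.claude/settings.json as
#   "statusLine": { "type": "command", "command": "~/.claude/statusline.sh" }
# Claude pipes a JSON payload on stdin; the field names below are
# assumptions from the docs, not a verified contract.
statusline() {
  # $1: the JSON payload; jq's // operator supplies a fallback value
  model=$(printf '%s' "$1" | jq -r '.model.display_name // "unknown"')
  dir=$(printf '%s' "$1" | jq -r '.workspace.current_dir // "."')
  printf '[%s] %s\n' "$model" "$(basename "$dir")"
}

# The real script would do: statusline "$(cat)"
statusline '{"model":{"display_name":"Opus"},"workspace":{"current_dir":"/home/me/game"}}'
```

Anything you can compute in a shell one-liner (git branch, battery, whatever) can go in that printf.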

Let’s start writing a Claude skill, with this guide provided by Anthropic. To keep it short, a skill is structured like this:

your-skill-name/
├── SKILL.md               # Required - main skill file
├── scripts/               # Optional - executable code
│   ├── process_data.py    # Example
│   └── validate.sh        # Example
├── references/            # Optional - documentation
│   ├── api-guide.md       # Example
│   └── examples/          # Example
└── assets/                # Optional - templates, etc.
    └── report-template.md # Example

Let’s make a simple skill frontmatter. A frontmatter ideally should include the name of the skill and its description:

---
name: your-skill-name
description: What it does. Use when user asks to [specific phrases].
---
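Putting the structure and frontmatter together, a minimal complete SKILL.md could look like this (the skill name and the instruction body below the frontmatter are placeholders of my own, not from Anthropic’s guide):

```markdown
---
name: build-log-triage
description: Summarizes build failures. Use when the user asks to
  diagnose a failed build or find the first error in a log.
---

# Build log triage

1. Run `tail -n 200` on the log file the user points at.
2. Report the first error line together with a few lines of context.
```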

In hindsight though, this seemed like a much better idea than I first thought, since I noticed that Gemini CLI gobbles up all the context with build logs (or at least doesn’t show that it doesn’t), while Claude seems to know how to use tail. That is enough for most cases, but still rather limited, as I have seen Claude going in loops to find exactly where the problematic error line is, and that kind of looping does add up.
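That tail trick is easy to reproduce by hand too. A hypothetical snippet like this (the log name and the error pattern are placeholders for whatever your toolchain produces) hands the agent only the end of the log plus the first few error hits, instead of the whole thing:

```shell
# Simulate a build log (placeholder content):
printf 'compiling main.c\nlinking\nerror: undefined symbol: foo\ndone\n' > build.log

# Only the last 200 lines, and only the first 5 error/warning hits,
# ever need to reach the agent's context:
tail -n 200 build.log | grep -n -i -m 5 -E 'error|warning'
```

Baking something like this into a skill’s scripts/ directory is exactly what saves the agent from looping over the full log.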

Retrospective

This wasn’t done as well as I thought it would be, to be honest. I haven’t really dug into MCPs, RAG, or sub-agents (I didn’t trust these tools enough not to supervise them at every step).

Though, at the time of writing this (May 7, 2026), Anthropic had already fixed the caching issues that used to cause Claude to eat up 40% of the 5-hour session in just one plan. This didn’t affect me too much though, as I took almost 2 hours to look over and supervise that plan’s formation and implementation anyway.