The End of Endless Google Searches and Stack Overflow

Programming used to mean mastering Google: you’d dive deep into forums, speed-read Stack Overflow posts, and experiment with various user-generated solutions until you found a snippet that you liked.

Now? I just press a keyboard shortcut and ask ChatGPT to generate the code directly.

LLMs like ChatGPT and Anthropic’s Claude have essentially “eaten the internet”: Stack Overflow for breakfast, GitHub for lunch, and MDN Web Docs for dinner, with all kinds of snacks in between.

LLMs Aren’t Replacing Us (Yet!)

The role of a programmer is evolving, not disappearing. While we can now offload a lot of repetitive, mundane tasks to AI, we still need to know what to do and how things should work, and this type of know-how only comes from practice and experience.

The New Role of the Programmer

  • Understand Requirements: See the bigger picture and what’s needed to achieve the goal.
  • Be Aware of Constraints: What’s possible? What’s not? What infrastructure is required?
  • Manage and Oversee: Follow each step closely, ensuring that each piece of code aligns with your plan.

If you over-delegate to an LLM, you might not understand how your own code works, leaving you unable to debug or modify it effectively. Stay in control by doing the thinking and letting the LLM do the typing!

Obfuscating Sensitive Data

When using LLMs, it’s crucial to protect sensitive details: replace unique names, swap out client info, keep identifiable data vague, and so on. This way, you’re sharing only what’s necessary to get useful results without exposing anything proprietary.
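As a minimal sketch of what this can look like in practice (the names, hostnames, and replacement values here are purely illustrative), a small helper can swap known sensitive strings for neutral placeholders before a snippet is pasted into a prompt:

```typescript
// Hypothetical redaction helper: replaces known sensitive values with
// neutral placeholders before text is shared with an LLM.
type RedactionMap = Record<string, string>;

function obfuscate(text: string, replacements: RedactionMap): string {
    let result = text;
    for (const [sensitive, placeholder] of Object.entries(replacements)) {
        // Escape regex metacharacters so literal values match safely.
        const escaped = sensitive.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
        result = result.replace(new RegExp(escaped, "g"), placeholder);
    }
    return result;
}

// Example: a client name and an internal hostname become generic stand-ins.
const prompt = obfuscate(
    "Connect to db.acme-corp.com and load the AcmeCorp invoices table.",
    { "db.acme-corp.com": "db.example.com", "AcmeCorp": "ClientX" }
);
// prompt === "Connect to db.example.com and load the ClientX invoices table."
```

Keeping the redaction map in one place also makes it easy to reverse the substitutions when you paste the LLM’s answer back into your own codebase.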

In the big picture, data has become a currency, and every interaction with an LLM adds to its real-world knowledge. Whilst we’re helping to make AI models more powerful and more useful, we’re also handing over valuable insights that could be absorbed into the AI and potentially be accessed by competitors down the line. Balancing AI’s benefits with data privacy means carefully obfuscating details so that you can leverage the power of LLMs without compromising critical information.

Choosing the Right LLM for the Job

At the time of writing, the state-of-the-art model is OpenAI’s o1, available through ChatGPT. It seems noticeably superior to the rest at coming up with a good solution, so when I need maximum “intelligence” I’m forced to use up some of my weekly quota of 50 prompts.

Others swear by Claude 3.5 Sonnet, and I’ve seen tests where it outperforms ChatGPT-4o.

For most things, I use ChatGPT-4o because it’s eminently practical and available via the desktop app. Whatever your own preferences, it’s great to have a variety of tools at our disposal, and I encourage you to experiment with different models.

Versioning is More Important than Ever

With LLMs speeding up code generation, it’s easier than ever to make lots of small changes and potentially to go down the wrong path. Good version control becomes critical:

  • Create frequent checkpoints
  • Commit often on a local branch

This helps to track progress and make adjustments efficiently.
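In practice, this can be as simple as committing after every LLM-assisted change on a local branch, so a wrong path costs one reset instead of an afternoon. The commands below are a runnable sketch in a throwaway repository (the branch name, file, and messages are just examples):

```shell
set -e
# Demo in a throwaway repo so the commands can run anywhere.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name Demo

# Work on a local branch so experiments stay off the main line.
git checkout -qb llm-experiment

# Commit after every small, LLM-assisted change: a cheap checkpoint.
echo "v1" > app.txt
git add -A && git commit -qm "Checkpoint: working version"

echo "v2-broken" > app.txt
git add -A && git commit -qm "Checkpoint: LLM-suggested change"

# Wrong path? Roll back to the last good checkpoint.
git reset -q --hard HEAD~1
cat app.txt   # prints "v1"
```

Because each checkpoint is tiny, `git diff` also makes it easy to review exactly what the LLM changed before the code goes anywhere near a shared branch.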

Good Tasks for LLMs vs. Tasks to Avoid

LLMs are Great for:

  • Remembering commands: Example: “What’s the Git command for seeing the file history?”
  • Generating small code snippets: Example: “Write a JavaScript function that takes a string, creates a new date, and appends the date to the string.”
  • Handling repetitive tasks: Example: “Change a value in 100 JSON objects based on a condition.”
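For a sense of scale, the snippet-generation and repetitive-task examples above might come back from the LLM looking something like this (a sketch; the field names and the condition are illustrative, not from any real project):

```typescript
// 1. A small snippet: append the current date to a string.
function appendDate(text: string): string {
    const date = new Date().toISOString().slice(0, 10); // e.g. "2025-01-31"
    return `${text} (${date})`;
}

// 2. A repetitive task: change a value in many JSON objects
//    based on a condition.
interface Item { status: string; retries: number; }

function markStale(items: Item[]): Item[] {
    // Flag every item that has been retried more than three times.
    return items.map(item =>
        item.retries > 3 ? { ...item, status: "stale" } : item
    );
}

const updated = markStale([
    { status: "ok", retries: 1 },
    { status: "ok", retries: 5 },
]);
// updated[1].status === "stale", updated[0] is untouched
```

Tasks of this size are ideal: they are self-contained, trivially verifiable by eye, and tedious to type by hand, so the LLM saves time without ever really being trusted with a decision.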

LLMs Struggle With:

  • Broad or Vague Requests: Asking for “a house” is too open-ended. Instead, specify: “a house with four small windows, a blue door, and a high roof”, for example.
  • New or Niche Code: It’s not a good idea to ask questions about brand-new features or cutting-edge libraries. Most LLMs have knowledge cut-off dates, so they don’t know about the latest updates.
  • Complex, Multi-File Code with Delicate Inputs/Outputs: This requires precise handling of the interactions between code files.
  • Requests with Errors: LLMs rarely admit uncertainty. If you provide an incorrect prompt, they may still generate code rather than admit they don’t know. There is also the risk of hallucinations, where the LLM gives us a response containing something totally inaccurate.

Practical Tips for Getting the Best from LLMs

Be as clear as possible with your instructions. LLMs have large context windows, so it’s okay to repeat the same structure several times to avoid confusion and to make things clear and unambiguous. Here’s an example:

I want an HTML code snippet.

It should have two <div> elements:

First <div>: Contains three <span> elements:
  First <span>: Text should say, “Hello, this is a test.”
  Second <span>: Text should say, “And this is another test.”
  Third <span>: Text should say, “This is the final test.”

Second <div>: Contains an <img> element with a CSS class called “tomato”.

When working with multiple files or sections, I try to delineate them by using HTML-style tags like this:

<Server.ts>

export class Recorder {
    onDataAvailable: (buffer: Iterable) => void;
    private amplitudeCallback: ((amplitude: number) => void) | undefined;

    private audioContext: AudioContext | null = null;
    private mediaStream: MediaStream | null = null;
    private mediaStreamS
…

</Server.ts>
<ThreeScene.tsx>
   const mLight = new THREE.PointLight(0xffffff, 1, 0.3);
        mLight.position.set(0, 0, 0);
        mLight.castShadow = true;
        scene.add(mLight);

        // Torus geometries
        const geometries = [
            new THREE.TorusBufferGeometry(8, 2, 40, 150),
            new THREE.TorusBufferGeometry(8, 2, 40, 150),
            new THREE.TorusBufferGeometry(8, 2, 40, 150)
        ];
…

</ThreeScene.tsx>

I sometimes take a screenshot of the file structure so the LLM can understand where all the files are placed in relation to each other. I say something like “See image for file structure”, and then I describe the issue or give the instructions after providing all the relevant files.

When I encounter a compilation error, a quick way of giving GPT the context is to take a screenshot and paste it into the conversation. Copying and pasting the actual error text might give the LLM better context than an image, but the multi-modal abilities of current models seem to handle images just as well.

Conclusion

The programming landscape is rapidly evolving thanks to LLMs. They’re transforming workflows, making it easier to generate code, manage repetitive tasks, and recall complex commands.

From understanding the need to obfuscate sensitive data to choosing the right LLM for each job, developers today are balancing AI’s power with careful use. Here at ClearPeaks, our developers stay ahead by mastering these advanced techniques, ensuring faster development and high-quality code for our customers.

We all increasingly use AI, both in our business solutions and in our daily routines, to deliver the best results. Reach out with your ideas and contact us: we’re ready to bring them to life!