Welcome to JojoJubah

AI, Tech & YouTube Tutorials — Learn, Build, and Create with me.

About JojoJubah

Hi, I'm JojoJubah ;) I'm not an expert but I enjoy sharing fun tips and tricks about AI, Tech & Economics.

I create fun tutorials and guides, so anyone can level up their skills and join the new wave of artificial intelligence.

Subscribe and follow for unique tutorials, coding projects, and the occasional meme.

Learn

What is Prompt Engineering?

Prompt engineering is the craft of structuring inputs to get reliable outputs from AI models. It's part instruction design, part debugging. Good prompts are clear, specific, and include examples.

Follow-up prompting (why it works)

LLMs refine answers when you add constraints or feedback. Ask the model to "improve this," "explain your reasoning briefly," or "compare options in a table." Iteration > one-shot perfection.
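The follow-up pattern above can be sketched as a growing message list in the OpenAI-style chat format. This is a minimal illustration only: `call_model` is a hypothetical stand-in for a real LLM client, not an actual API.

```python
# Iterative refinement sketch: the conversation history accumulates,
# so each follow-up builds on the previous answer instead of starting over.

def call_model(messages):
    # Placeholder: in practice this would call your LLM API of choice.
    return f"(model reply to: {messages[-1]['content']!r})"

messages = [{"role": "user", "content": "Summarise this article in 3 bullets."}]
reply = call_model(messages)
messages.append({"role": "assistant", "content": reply})

# Follow-up: add a constraint rather than rewriting the whole prompt.
messages.append({"role": "user", "content": "Improve this: keep each bullet under 10 words."})
reply = call_model(messages)
messages.append({"role": "assistant", "content": reply})

print(len(messages))  # the full history travels with every call
```

The key point: the whole history goes back to the model each turn, which is why "improve this" works at all.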

Few-shot examples (when to use)

When your task has a pattern, show 1-3 short input→output examples. Keep style consistent and label clearly. This anchors the model and reduces randomness.
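A few-shot prompt is just labelled pairs followed by the new input. Here is a small sketch for a sentiment task (the task and labels are illustrative, not from the original):

```python
# Build a few-shot prompt: a couple of labelled input→output pairs,
# consistently formatted, then the new input awaiting its label.

examples = [
    ("great product, fast shipping", "positive"),
    ("arrived broken, no reply from support", "negative"),
]

def few_shot_prompt(examples, new_input):
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # End on the unfinished pattern so the model completes it.
    lines.append(f"Review: {new_input}\nSentiment:")
    return "\n\n".join(lines)

prompt = few_shot_prompt(examples, "does the job, nothing special")
print(prompt)
```

Ending on the unfinished "Sentiment:" line is the anchor: the model's most likely continuation is a label in the same style as the examples.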

Context windows & chunking

Long inputs get truncated. Prioritize: brief instructions → constraints → examples → source snippets. If content is large, chunk it and summarize progressively.
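A naive word-based chunker shows the idea. The chunk size and overlap values here are arbitrary placeholders; real pipelines usually count tokens, not words.

```python
# Split text into fixed-size word chunks with a small overlap,
# so each piece fits the context window and chunk boundaries share context.

def chunk_words(text, chunk_size=200, overlap=20):
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

doc = ("word " * 500).strip()
pieces = chunk_words(doc, chunk_size=200, overlap=20)
print(len(pieces))  # 500 words → 3 overlapping chunks
```

Each chunk can then be summarized, and the summaries summarized again: that is the "progressive" part.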

What is Context Engineering?

Context engineering is structuring the right info around your prompt so the model has what it needs (and nothing it doesn't). Think: goal → constraints → key facts → examples → attachments.

Core moves

• Be explicit about the outcome and format.
• Include only the relevant facts (links/files beat giant pasted walls).
• Ask the model to ask you clarifying questions first if anything is ambiguous.
• For long sources, chunk + summarize rather than dumping everything.
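The goal → constraints → key facts → examples ordering can be made mechanical with a small prompt builder. The field names and sample content below are illustrative, not a standard schema.

```python
# Assemble context in a fixed order: goal first, then constraints,
# then only the relevant facts, then examples (if any).

def build_context(goal, constraints, facts, examples=()):
    parts = [f"Goal: {goal}"]
    parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    parts.append("Key facts:\n" + "\n".join(f"- {f}" for f in facts))
    if examples:
        parts.append("Examples:\n" + "\n".join(examples))
    return "\n\n".join(parts)

prompt = build_context(
    goal="Write release notes for v1.2",
    constraints=["under 150 words", "plain English"],
    facts=["added dark mode", "fixed login bug"],
)
print(prompt)
```

Forcing yourself to fill named slots is also a good filter: if a fact doesn't fit any slot, it probably doesn't belong in the prompt.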

What is an AI Agent?

An agent is an LLM that can plan steps and use tools (like a browser, code runner, or files) to reach a goal. Think: "goal → plan → act → observe → iterate."

Tools, memory, planning

Tools extend capabilities (search, code, APIs). Short-term memory tracks the session. Long-term memory can store facts or preferences. Planners break big tasks into small executable steps.
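The "goal → plan → act → observe → iterate" loop can be shown with one toy tool and a stubbed planner. In a real agent, `plan_next_step` would be an LLM call; here it is a hypothetical stand-in that finishes after one tool use.

```python
# Minimal agent loop: plan a step, act with a tool, observe the result, repeat.

def calculator(expr):
    # Toy tool: evaluate a simple arithmetic expression.
    # eval is fine for this toy, but never use it on untrusted input.
    return eval(expr, {"__builtins__": {}})

TOOLS = {"calculator": calculator}

def plan_next_step(goal, observations):
    # Stub planner: a real agent would ask the LLM what to do next.
    if not observations:
        return ("calculator", "6 * 7")
    return None  # goal reached, stop

def run_agent(goal, max_steps=5):
    observations = []
    for _ in range(max_steps):
        step = plan_next_step(goal, observations)
        if step is None:
            break
        tool_name, tool_input = step
        observations.append(TOOLS[tool_name](tool_input))
    return observations

result = run_agent("compute 6 * 7")
print(result)
```

Note the `max_steps` cap even in this toy: the loop structure is exactly where runaway agents come from.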

Risks & guardrails

Agents can loop, hallucinate, or act out of scope. Use sandboxing, confirmation prompts for risky actions, strict tool schemas, and time/step limits.
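Those guardrails can be sketched as a simple gate in front of every tool call. The tool names and limits below are illustrative, not from any real framework.

```python
# Guardrail gate: allow-listed tools, a hard step budget,
# and explicit confirmation required for risky actions.

ALLOWED_TOOLS = {"search", "read_file"}
RISKY_TOOLS = {"delete_file"}
MAX_STEPS = 10

def check_action(tool, step, confirmed=False):
    if step >= MAX_STEPS:
        return "refused: step limit reached"
    if tool in RISKY_TOOLS and not confirmed:
        return "needs confirmation"
    if tool not in ALLOWED_TOOLS | RISKY_TOOLS:
        return "refused: unknown tool"
    return "allowed"

print(check_action("search", step=0))        # allowed
print(check_action("delete_file", step=1))   # needs confirmation
print(check_action("format_disk", step=2))   # refused: unknown tool
print(check_action("search", step=10))       # refused: step limit reached
```

The order of checks matters: the step budget is absolute, while confirmation can unlock risky-but-known tools.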

What does "open-source LLM" mean?

The model's weights and code are released under a license that lets you run them locally. Great for privacy, customization, and cost control; the trade-off is that it may need more setup and tuning.

Popular local tools

Ollama (easy model pulls), LM Studio (GUI), Text Generation WebUI (power-user option). Use GGUF-quantized models to reduce RAM use.

Running on modest hardware

Start with 7B-8B models, use 4- to 8-bit quantization, prefer CPU-friendly backends, and keep context short. Cache prompts and chunk data for speed.
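A quick back-of-envelope check explains why quantization matters on modest hardware. This estimate covers weights only, ignoring the KV cache and runtime overhead, so treat it as a floor, not a full budget.

```python
# Rough RAM needed just to hold the weights of a model at a given quantization.

def weight_ram_gb(params_billion, bits_per_weight):
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

print(round(weight_ram_gb(7, 4), 1))   # 7B model at 4-bit: ~3.5 GB
print(round(weight_ram_gb(7, 16), 1))  # same model at fp16: ~14 GB
```

That 4x difference is the gap between "fits on a laptop" and "doesn't", which is why 4-8-bit GGUF builds are the usual starting point.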

Featured Projects

Build a Custom YouTube Download App (Easy Method)

Download YouTube MP3s in seconds using the power of AI, built with Batch and explained in a full tutorial!

Watch Tutorial

Improve Your LLM Responses With This Trick

Use this follow-up prompting technique to improve your AI responses, step by step on my channel.

Watch Short

This LLM AI App works offline!

No account needed, completely private LLM chats that work offline, with a large selection of models to choose from.

Watch Short

Economics

I'm collecting practical economics notes: simple explanations, mental models, and "why it matters" breakdowns for creators and builders. This will grow over time.

Want the deeper dives? Open the dedicated page.

Contact & Socials

Email: (email me)
Location: London, UK (remote, open for collabs!)