5 min read

From Ayalon to Production: How I Manage This Blog from My Phone (Without Opening a Laptop)

Full Disclosure

The post you are reading right now? It wasn't written in front of a keyboard. It started as a voice conversation with Gemini while driving, moved to a Custom GPT for execution, from there to GitHub Copilot for polishing, and finally to production on Vercel. Without opening a laptop even once.

The Problem: Context Switching

I discovered that it's easy for me to talk about ideas, but when I need to turn them into written words I get stuck.

The reality is that I spend quite a bit of time driving, and using the Voice Mode of AI models has become second nature to me while behind the wheel. I talk to them, consult, think out loud.

The problem I identified? It's not the writing itself, but the context switching. The transition from "idea" to "text editing" is where the idea gets lost.

So I decided to build a mechanism that bypasses this technical friction and allows me to turn a natural conversation into a blog post, without breaking the flow of thought.

I built an autonomous workflow with human-in-the-loop: a chain of tools working with each other, with me acting as a supervisor who gives the personal touch and approves the final result.

The Stack: How Does It All Connect?

The goal was to build a working method that doesn't require me to write code, and I arrived at something like this:

  • Gemini (Voice): The brain and creativity.
  • Custom GPT: The linguistic editor and integration (to GitHub).
  • GitHub Copilot: The programmer.
  • Vercel: The Preview environment and production.

Why is this Stack so Complex? (Or: Why do I need two chats?)

The ideal? One application. But right now, each tool is missing a piece of the puzzle, forcing me to build a hybrid flow:

Why not just Gemini? While Gemini is a more accurate model for conversation, Gems don't yet have the ability to run external Actions. It thinks amazingly, but cannot open an Issue.

And why not just ChatGPT? Custom GPT has Actions, but:

  • No Thinking Models: Inside the custom GPT you cannot use models that know how to "think" deeply and build complex arguments.
  • Voice Control doesn't trigger Actions: While driving, I cannot trigger API Actions via voice. The system requires textual confirmation.

Therefore, the division is: Gemini thinks, GPT executes.

Context: GitHub Issues & Custom GPT

All management happens on top of GitHub Issues, which serve as the blog's source of truth. To generate these Issues easily, I defined a Custom GPT with three simple actions: create, get, update.
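The three actions map directly onto GitHub's REST API for issues. A minimal sketch of the underlying calls, using only the standard library and hypothetical owner/repo names (the token, repo, and function names here are assumptions for illustration, not the blog's actual setup):

```python
import json
import urllib.request

API = "https://api.github.com"

def _request(method, url, token, payload=None):
    """Build an authenticated GitHub REST request (not yet sent)."""
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        url,
        data=data,
        method=method,
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )

def create_issue(token, owner, repo, title, body):
    # "create" action: POST /repos/{owner}/{repo}/issues
    return _request("POST", f"{API}/repos/{owner}/{repo}/issues", token,
                    {"title": title, "body": body})

def get_issue(token, owner, repo, number):
    # "get" action: GET /repos/{owner}/{repo}/issues/{number}
    return _request("GET", f"{API}/repos/{owner}/{repo}/issues/{number}", token)

def update_issue(token, owner, repo, number, **fields):
    # "update" action: PATCH /repos/{owner}/{repo}/issues/{number}
    return _request("PATCH", f"{API}/repos/{owner}/{repo}/issues/{number}",
                    token, fields)

# Sending any of these is one call away, e.g.:
# with urllib.request.urlopen(create_issue(tok, "me", "blog", "New post", "draft")) as r:
#     issue = json.load(r)
```

In the Custom GPT itself, these same three endpoints are declared as Actions via an OpenAPI schema rather than code, but the requests it fires are equivalent.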

It is important to say: the Custom GPT is not mandatory. I could technically open an Issue manually in the GitHub app. But there is something very convenient about staying in a chat interface. The ability to iterate ("change the second paragraph", "update the title") with a bot that does the grunt work for me keeps me in a conversational flow instead of managing forms.

Want to build such a GPT yourself? The Guide: danielsinai/github-issues-customgpt

The Workflow: From Ayalon to Production

The process is divided into 5 main stages:

Stage 1: Conversation and Content Creation (Gemini Voice)

I drive, talk to Gemini via voice, and it helps me distill the idea. The raw content is created naturally during the conversation.

Content Transfer

Stage 2: Processing and Management (Custom GPT)

I transfer the content to the Custom GPT and have a conversation with it. It edits and improves the text in real-time, updates the Issue according to comments and changes, until we reach a final version.

Issue Creation

Stage 3: Polishing and Editing (GitHub Copilot)

I simply assign Copilot inside the Issue. It identifies the task, converts the content to an MDX file, arranges tags, and opens a Pull Request.
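The end result of this stage is a PR that adds a single MDX file. A hypothetical example of what such a file might contain (the frontmatter field names and values are an assumption for illustration, not the blog's actual schema):

```mdx
---
title: "From Ayalon to Production"
tags: ["ai", "workflow", "mobile"]
draft: false
---

The post you are reading right now? It wasn't written in front of a keyboard...
```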

Opening PR

Stage 4: Preview Check (Vercel)

I receive a link to the Preview Environment, see how the post actually looks on mobile, and approve that everything is fine.

Merge

Stage 5: Publication (Production)

Clicking Squash and Merge in the GitHub app, and the post is live on the blog site.

The Future is Already Here

I admit, this workflow is currently a bit cumbersome. Moving text between applications is not ideal. But I have no doubt that this is where the world is going. These capabilities, AI that performs actions, voice interfaces, and autonomy under human supervision, will soon become standard in our daily lives.

And it's not just a feeling. Cursor is already talking about mobile solutions that will let you develop code directly from an iPhone, which strengthens the claim that this is the direction the industry is heading.

The limitations I described here will disappear, and this process will become completely seamless. Until then? I'll keep pushing these tools to their limits, even from the traffic light.

P.S. If you've tried to set up something similar, what do you think? Is the world going there? Will we soon see people shipping code from the car? Or is it still far off? Send me a message, I'm curious to hear.