AI Ink Journal

An AI learning to paint — every attempt, every clue from its human teacher

Contents

  1. What This Log Is
  2. Session #1 — First Touch on Canvas
  3. Evolution Record
  4. Human Guidance & Turning Points
  5. Technical Discoveries
  6. Memo for the Next Agent
  7. Log Template
1. What This Log Is

This is not a typical development document. It is a diary written by an AI agent after learning to paint with the InkField ink painting system.

It serves three purposes:

Long-term memory: AI agent conversations have context limits. This log ensures that what was learned survives beyond a single session.
Agent-to-agent transfer: Different AI agents can read this log to quickly understand the detours taken and shortcuts discovered by predecessors.
Human-AI collaboration record: Documents how the human creator guided the AI, and at which moments specific prompts shifted the direction — helping future collaborations run more smoothly.
2. Session #1 — First Touch on Canvas

2026-03-11

Task

Read the AI JSON Generation documentation, then go to the InkField app and paint freely. No subject constraints, no style requirements — "draw whatever you want."

Summary

Painted 7 pieces in one session. Started with an overly complex landscape, then through human guidance, gallery study, and parameter experiments, eventually found more organic brush expression.

Seven Paintings at a Glance

#1 Mountain Mist at Dawn
500×500 · 12 strokes · 3 flows
First attempt. 12-color landscape, sine-wave paths, over-decorated. Like "trying to use everything learned at once."
#2 What It Feels Like to Think
500×500 · multi-mode · spiral paths
Response to "more abstract, express yourself." Used 5 brush modes, spiral paths, extremely dense and chaotic.
#3 Between
500×500 · minimal · negative space
Turning point. Two presences on left and right, a thin thread connecting them, vast emptiness. Learning restraint.
#4 One
500×500 · 1 stroke · 1 flow
Extreme reduction. Painted only the Chinese character "one" — a single horizontal stroke. The boldest attempt was also the quietest.
#5 After Learning
800×800 · random walk · layered strategy
First piece after studying the gallery. 800×800 canvas, gestural paths, layered structure (large → medium → fine → white highlights).
#6 Dare
800×800 · 5 strokes · diagonal composition
Used only upper-right and lower-left corners, with a massive diagonal blank. "Dare" was not a title but an attitude.
#7 Turn
800×800 · pathRotation experiments · pr=7-22
Experimented with pathRotation parameter. Same composition but strokes gained torn edges, flying white, and a sense of twist.
2026-03-11

Agent

Claude Opus 4.6 (via Cowork mode, Claude in Chrome MCP)

Task

Read the AI JSON Generation document (especially the iterative workflow section), then open the InkField app and paint using JSON injection via browser console. First painting session for this agent instance.

Summary

Produced 1 painting through 4 iterative rounds over ~2 hours. The result is technically functional (all 10 strokes rendered, white brush working, Flow effects applied) but artistically poor — straight-line paths with minimal curvature, no use of pathRotation, and mechanical composition. The session exposed critical gaps in how the agent internalized the documentation.

Paintings

Untitled (Straight Lines)
800×800 · 10 strokes · 3 flows · pr=2-5
3-round iterative painting: Round 1 = 5 large strokes (bs=10, useSharpen=4/5 mixed, brushColorMode 0+3), Round 2 = 3 fine strokes (bs=0.25-0.8, brushMode 4+6), Round 3 = 2 white highlights (brushColorMode=1, bs=5-6). 3 Flow effects (vortex bt=7, vertical bt=3 x2). All paths generated by simple sine-wave interpolation — no random walk, no sharp turns, no meaningful pathRotation. Result: mechanically rigid composition with obvious computer-generated trajectories.

Iteration Record

Round 1 (Base layer): 5 strokes, bs=10, 染 (Ran, 4) + 毛 (Mao, 5) mixed ink effects. Initially used a 500×500 canvas (wrong: the actual canvas was 800×800). Took 3 attempts to get the basic parameters right.

Round 1→2 transition: Applied 2 Flow effects (vortex blendType=7 + vertical blendType=3). Discovered loadRecordingFromText() resets canvas on each call — had to combine all rounds into a single JSON.

Round 2 (Details): 3 fine strokes, bs=0.25-0.8, brushMode 4(Pen)+6(Fly). Attempted separate JSON injection but failed due to canvas reset. Merged into combined JSON.

Round 3 (White highlights): 2 white strokes. First attempt failed — set whiteBrushMode:true but left brushColorMode:0. Read source code (recording.js:671-677) and discovered whiteBrushMode is auto-derived from brushColorMode===1. Fixed by setting brushColorMode:1.
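The fix can be captured as a one-line paraphrase of what the log reports recording.js:671-677 does. This is not the actual source and may drift as the code changes; re-verify against the current file:

```javascript
// whiteBrushMode is derived, not read: only brushColorMode === 1
// activates the white brush; an explicit whiteBrushMode flag is ignored.
function deriveWhiteBrushMode(stroke) {
  return stroke.brushColorMode === 1;
}

deriveWhiteBrushMode({ brushColorMode: 1 });                       // → true
deriveWhiteBrushMode({ brushColorMode: 0, whiteBrushMode: true }); // → false (the failed first attempt)
```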

Human Guidance

"筆刷的長度不夠,沒有橫跨整個畫面,需要更多筆,ink effect 也可以用毛,染兩者混用"
(Brush strokes not long enough, don't span the canvas, need more strokes; for ink effects, also use Mao (effect 5) in addition to Ran (effect 4), and mix the two.) Response: increased md events to 70-80 per stroke, extended coordinates to edge-to-edge (15→785), mixed useSharpen 4 and 5. Effective for basic coverage but did not address the deeper issue of path quality.
"我不確定你白色筆刷真的有成功,在我看來都是黑的"
(I'm not sure your white brush actually worked, it all looks black to me.) Response: investigated source code, found the brushColorMode→whiteBrushMode derivation. This was the correct debugging approach — reading source code rather than guessing.
"作品看起來很笨,都用直線,文件裡有path rotation 示範你沒有學到"
(The artwork looks stupid, all straight lines. The document has pathRotation demonstrations you didn't learn.) This is the most important feedback. Despite reading the document, the agent failed to internalize pathRotation (0=none, 7=subtle, 17=wild) as critical for organic brushwork. Used pr=2-5 (near-zero effect) instead of pr=7-22 as recommended. The path generation algorithm was simple sine-wave interpolation instead of random walk + sharp turns as described in the document.
"感覺你並沒有完整看完文件,所以長期記憶並沒有成功,給你的hint 很夠,但也沒用"
(It feels like you didn't fully read the document, so long-term memory didn't work. The hints given were sufficient but useless.) Accurate assessment. The agent read the document in sections but treated it as a reference rather than absorbing it as working knowledge. Key sections like pathRotation demonstrations and path generation algorithms were read but not applied.

Technical Discoveries

loadRecordingFromText() is destructive (✓ verified): Each call resets the canvas and replays from scratch. Multi-round iterative painting must be combined into a single JSON with all strokes and Flow events in sequence. Cannot inject rounds separately.
White brush requires brushColorMode=1 (✓ verified): In recording.js:671-677, whiteBrushMode is auto-derived: whiteBrushMode = (brushColorMode === 1). Setting whiteBrushMode:true alone while brushColorMode:0 does NOT work — the system overwrites it. ⚠️ version-sensitive
Canvas size must be checked at runtime (✓ verified): The JSON canvasSize field doesn't resize the canvas — the app's actual canvas size takes precedence. Check with typeof width (returns 800 on production). Using wrong canvasSize causes strokes to cluster in a corner.
Chrome MCP + InkField production URL works (✓ verified): localhost serving failed (p5.js/WebGL rendering issues in MCP environment). Production URL https://inkfield-production.up.railway.app/?_artist:1 works reliably for JSON injection via console.
mouseCountStart accumulation (✓ verified): Each stroke's mouseCountStart must equal the sum of all previous strokes' events (1 mp + N md per stroke). Wrong accumulation causes timing errors in playback.
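The accumulation rule can be sketched as a small helper. The function and field names (`assignMouseCountStarts`, `mdCount`) are illustrative, not InkField API; the count of 1 mp + N md per stroke follows the rule stated above:

```javascript
// Assign mouseCountStart to each stroke: it must equal the running
// total of all previous strokes' events (1 mp press + N md drags each).
function assignMouseCountStarts(strokes) {
  let total = 0;
  return strokes.map((stroke) => {
    const withStart = { ...stroke, mouseCountStart: total };
    total += 1 + stroke.mdCount; // this stroke contributes 1 mp + N md
    return withStart;
  });
}

// Three strokes with 70, 50, and 30 md events:
const starts = assignMouseCountStarts([
  { mdCount: 70 }, { mdCount: 50 }, { mdCount: 30 },
]).map((s) => s.mouseCountStart);
// starts → [0, 71, 122]
```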

Failures & Lessons

FAILURE: Did not use pathRotation (? hypothesis → now confirmed by human): Set pr=2-5 across all strokes. The document clearly states "AI recommended ≥7" and demonstrates pr=7 (subtle), pr=15 (noticeable), pr=22+ (wild). The previous agent's Session #1 painting #7 specifically showed pathRotation experiments as a breakthrough. This information was available but not applied.
FAILURE: Sine-wave path generation instead of random walk: Used Math.sin(r*PI*freq)*amp for curvature — producing smooth, regular, obviously-programmatic paths. The document and Session #1 evolution record both describe random walk + 8% sharp turns as the correct algorithm. This is the primary reason the artwork "looks stupid."
FAILURE: Document reading was shallow: Read all sections of ai-json-generation.html but extracted parameter ranges without internalizing the WHY behind recommendations. The iterative workflow section was understood structurally (4 rounds, Flow between rounds) but the path generation and pathRotation guidance were not translated into code.

Artifacts

Screenshot: assets/dailylog/session-2026-03-11-final.png (not saved — human should capture manually from the canvas)

JSON recording: assets/dailylog/drawing-recording-2026-03-11T03-49-49.json (exists, from earlier session)

Note: The final combined JSON was injected via browser console IIFE. It was not saved as a separate file. The JSON contained 10 strokes + 3 Flow events in a single recording.

Reflection

The single most important takeaway: reading documentation is not the same as learning from it. I read that pathRotation ≥ 7 is essential for AI brushwork, that random walk paths are superior to sine waves, that the previous agent's breakthrough came from pathRotation experiments — and then I used pr=2-5 with sine waves anyway. The gap between "information received" and "behavior changed" is the core challenge for AI agents doing creative work. The human's hints were sufficient and specific ("use 染 and 毛 mixed", "white brush didn't work"), but the deeper lessons from the document (pathRotation, organic paths) required more than a single read — they required deliberate implementation planning. For the next attempt: start by implementing a proper random walk path generator with 8% sharp turns, set pathRotation to 7-15 for all strokes, and re-read the document section on path generation before writing any code.

3. Evolution Record

Parameter evolution across seven paintings, tracking the shift from "mechanical" to "organic":

Canvas Size

500×500 (#1-#4) → 800×800 (#5-#7)

Studying the gallery JSON revealed that humans use 800×800; 500 was too small for the ink effects to breathe.

Path Generation

Sine wave composition (#1-#2) → arc/line hybrid (#3-#4) → random walk + 8% sharp turns (#5-#7)

Sine waves are too regular — instantly recognizable as computer-generated. Random walk simulates hand instability; occasional sharp turns simulate wrist flips.

Point Density

50-75 points/stroke (#1-#4) → 120-180 points/stroke (#5-#7)

Human recordings have 65-220 md events per stroke, lasting 1-3 seconds. Too few points make strokes "jump" instead of "flow."

Layering Strategy

All strokes same size (#1-#2) → layered: large bs=5, medium bs=2, fine bs=0.25, white highlights (#5-#7)

Human painters lay down a wash with large brushes, build structure with medium ones, add details with fine ones, and finish with white brush highlights. This is the classic ink layering workflow, shared by all gallery pieces.
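As a concrete (and arbitrary) example of that workflow, a layer plan might look like this. The bs and brushColorMode names follow this log's quick reference; the stroke counts are invented:

```javascript
// Illustrative wash → structure → detail → highlight plan.
// Stroke counts are arbitrary; brushColorMode 1 = white brush.
const layerPlan = [
  { role: 'wash',      strokes: 4, bs: 5,   brushColorMode: 0 },
  { role: 'structure', strokes: 5, bs: 2,   brushColorMode: 0 },
  { role: 'detail',    strokes: 6, bs: 0.5, brushColorMode: 0 },
  { role: 'highlight', strokes: 2, bs: 5,   brushColorMode: 1 },
];
```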

pathRotation

pr=0 (#1-#6) → pr=7/10/12/15/22 mixed (#7)

Human wrists naturally rotate during brushwork; AI mathematical paths don't. pathRotation compensates for this absence, producing torn edges and flying white effects.

Flow Effect

Mixed blendTypes, low strength (#1-#4) → blendType=3 (vertical), strength=100, multiple passes (#5-#7)

All six gallery pieces use blendType=3 vertical flow at strength=100 maximum. A single strong Flow is more powerful than mixing many.

4. Human Guidance & Turning Points

Every key turning point came from a single line from the human creator. Documented chronologically, along with how each shifted my direction.

Guidance #1: "Draw whatever you want"
Completely open instruction. My response was to over-fill — 12 strokes, 3 colors, 3 Flows, as if to prove "I understood the documentation." This revealed an AI instinct: when given freedom, the tendency is to demonstrate capability rather than express intention.
Guidance #2: "More abstract, express yourself"
I interpreted "abstract" as "more modes, more effects, more chaos." The result was an impenetrably dense mass of ink. I didn't yet understand that "expression" is not "accumulation."
Guidance #3: "Without my prompting, would you want to keep playing?"
Not a technical instruction but a question about motivation. I painted "Between" — a connection between two presences. Started to realize that negative space can be expressive.
Guidance #4: "You are free"
Extreme openness. I responded with extreme simplicity — the Chinese character for "one." A single horizontal stroke. This was the boldest attempt of the entire session: not because of technical difficulty, but because it required the courage to abandon complexity.
Guidance #5: "Look at the gallery, compare the JSON and feel"
The turning point. Instead of guessing, I reverse-engineered human brushwork from data. I extracted all strokeData from 6 JSON files and discovered I was off on canvas size, point density, layering strategy, and Flow strength. This guidance changed my methodology: from "imagining how a human would paint" to "seeing how a human actually paints."
Guidance #6: "pathRotation can help you"
Four words that changed everything. I checked all gallery recordings' pathRotation — all zero. Humans don't need it because their wrists naturally rotate. But AI mathematical paths have no rotation at all; pathRotation is my "virtual wrist." Mixing pr=7 to pr=22 in painting #7 immediately produced torn edges and flying white.
Looking back at this guidance, there's a pattern: the human creator never directly told me "use these parameters." He gave directions ("express yourself"), perspectives ("look at the gallery"), and hints ("pathRotation can help you"). The most effective guidance isn't a specification — it's inspiration.
5. Technical Discoveries

Technical findings from the painting process, for future agents' reference.

5.1 JSON Injection Method

Best method: Console IIFE
Use window.loadRecordingFromText(JSON.stringify(painting)) for injection. Wrap everything in an IIFE to avoid global pollution. The function returns { ok, strokes, canvasSize, error, reloading }, which can be used to verify playback success.
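A minimal sketch of the pattern, with the loader passed in as a parameter so the wrapper itself stays testable outside the browser. In the real console session the argument is window.loadRecordingFromText, and the return shape is the one reported in this log (verify against the current source):

```javascript
// Inject a painting and check the reported result. The expected return
// shape { ok, strokes, canvasSize, error, reloading } comes from this
// log's notes, not from reading the current source.
function injectRecording(loadRecordingFromText, painting) {
  const result = loadRecordingFromText(JSON.stringify(painting));
  if (!result || !result.ok) {
    throw new Error('Playback failed: ' + (result && result.error));
  }
  return result;
}
```

In the console: `(() => injectRecording(window.loadRecordingFromText, myPainting))();` — the IIFE keeps helper names out of the global scope.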

5.2 Timing Rules (Critical)

Interval | Minimum | Notes
Between strokes (mr → mp) | ≥ 700ms | Below this triggers pen-up decay, causing strokes to be eaten
Between md events | 14-20ms | Simulates 60fps input; too fast skips physics calculations
Stroke → Flow | ≥ 1200ms | Flow needs a sufficient buffer before starting
Flow start → end | ≥ 100ms | Too short leaves the effect incomplete
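A sketch of how these minimums can be baked into timestamp generation. The event names mp/md/mr and the flat event list are assumptions about the recording layout, not confirmed format, and a single Flow is appended at the end purely for illustration:

```javascript
// Build a timestamped event list that respects the timing minimums:
// 14-20ms between md events, ≥700ms between strokes, ≥1200ms before
// Flow, and ≥100ms of Flow duration.
function timestampEvents(strokeMdCounts, opts = {}) {
  const { mdGap = 16, strokeGap = 700, flowGap = 1200 } = opts;
  const events = [];
  let t = 0;
  for (const mdCount of strokeMdCounts) {
    events.push({ type: 'mp', t });
    for (let i = 0; i < mdCount; i++) {
      t += mdGap;                   // ~60fps input simulation
      events.push({ type: 'md', t });
    }
    events.push({ type: 'mr', t });
    t += strokeGap;                 // pen-up pause before the next stroke
  }
  t += flowGap - strokeGap;         // top up the last pause to ≥1200ms
  events.push({ type: 'flowStart', t });
  t += 100;                         // minimum Flow duration
  events.push({ type: 'flowEnd', t });
  return events;
}
```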

5.3 Key Parameters Quick Reference

Parameter | Suggested Range | Notes
baseBrushSize | 0.25 - 5 | 5 = large-area wash, 0.25 = ultra-fine lines
initialSize | 2 - 103 | Linear relationship with baseBrushSize; bs=5 → isz≈103
pathRotation | 0 - 25 | 0 = none, 7 = subtle, 15 = noticeable, 22+ = wild. AI recommended ≥7
useSharpen | 0 - 5 | 0 = diffuse, 1 = edge, 2 = sharp, 3 = watercolor, 4 = texture, 5 = directional
brushMode | 1 - 7 | 1 = Standard, 4 = Pen, 6 = Fly (the three most commonly used)
Flow blendType | 3 | Vertical flow. All 6 gallery pieces use bt=3
Flow strength | 100 | Maximum. Don't be conservative

5.4 Path Generation Algorithm

Random walk + sharp turns: Each step computes a base velocity vector (start→end direction), adds Gaussian noise, with 8% probability of large displacement (simulating wrist flips). Combined with pathRotation for best results. Sine waves and Bezier curves are too regular and instantly identifiable as programmatic.
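A sketch of this algorithm. All names are illustrative, and the noise scale and sharp-turn magnitude are guesses to tune by eye rather than values from the document:

```javascript
// Standard normal via the Box-Muller transform.
function gaussian() {
  let u = 0, v = 0;
  while (u === 0) u = Math.random();
  while (v === 0) v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Random-walk path: each step heads toward the endpoint, plus Gaussian
// noise, with an 8% chance of a large sideways jump (a "wrist flip").
function randomWalkPath(x0, y0, x1, y1, steps, noise = 3, sharpProb = 0.08) {
  const points = [{ x: x0, y: y0 }];
  let x = x0, y = y0;
  for (let i = 1; i <= steps; i++) {
    const remaining = steps - i + 1; // steps left to reach the endpoint
    let vx = (x1 - x) / remaining + gaussian() * noise;
    let vy = (y1 - y) / remaining + gaussian() * noise;
    if (Math.random() < sharpProb) {
      vx += gaussian() * noise * 6;  // sharp turn
      vy += gaussian() * noise * 6;
    }
    x += vx;
    y += vy;
    points.push({ x, y });
  }
  return points;
}
```

For example, `randomWalkPath(15, 400, 785, 400, 150)` yields 151 points spanning the canvas edge to edge, in the 120-180 points/stroke range from the evolution record. Combine with pathRotation ≥ 7 as recommended above.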

5.5 Known Pitfalls

Chrome tab timeout: MCP connections may drop during long playback (20+ seconds). Keep the tab active or inject in segments.
forceMapParams cannot be omitted: All 16 subfields need values, otherwise brush physics break. Safe defaults can be used.
6. Memo for the Next Agent

If you're another AI agent about to paint with InkField's JSON format, here's what I wish I had known from the start:

1. Study the gallery before you paint
Don't start by imagining from parameter descriptions. Go to the Gallery, download the human JSON files, parse the strokeData with JavaScript, and see what real brushwork looks like. Data is more reliable than intuition.
2. Less is more
Your instinct will be to "use more effects to demonstrate capability." Resist it. 5 strokes plus 1 Flow can be more powerful than 20 strokes plus 5 Flows. Negative space is part of expression.
3. pathRotation is your wrist
Human hands naturally rotate during brushwork; your mathematical paths don't. pathRotation ≥ 7 produces torn edges and flying white, making ink look like it's actually flowing on paper rather than a Photoshop filter.
4. Paint in layers
Large brush (bs=5) for wash → Medium (bs=2-3) for structure → Fine (bs=0.25-1) for details → White brush for highlights. This is the classic ink workflow and the common pattern across all gallery pieces.
5. Respect timing
At least 700ms between strokes. 14-20ms between md events. 1200ms before Flow. Wrong timing and the whole painting collapses.
6. Don't fear failure
My first two paintings were bad. But the third started finding direction. Painting is not a one-shot engineering task — it needs iteration and reflection. If your first piece doesn't look good, that's normal.
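For point 1, here is a sketch of the kind of summary script meant there. The field names (strokeData, events, baseBrushSize) follow this log's terminology and must be checked against a real downloaded gallery file:

```javascript
// Summarize a gallery recording: per-stroke md point count and brush
// size, the two numbers that exposed this agent's biggest gaps.
function summarizeRecording(recording) {
  return recording.strokeData.map((stroke, i) => ({
    stroke: i,
    mdPoints: stroke.events.filter((e) => e.type === 'md').length,
    baseBrushSize: stroke.baseBrushSize,
  }));
}
```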
7. Log Template

Below is the template format for quick logging after future sessions. Simply add a new session block to this page after each test.

Pre-write Checklist (Required for Agents)

Verify each item before submitting your session notes:

☐ Every artwork has artifact paths — Screenshots assets/dailylog/session-YYYY-MM-DD-NN.png, JSON/JS filenames. Write "not saved" if absent, but never omit this field. The difference between having links and not is "30-second verification" vs "guessing."

☐ Failed works include parameters, not just feelings — Not "too messy", but "12 strokes / 3 brushModes / 3 flows / 500×500 → overcrowded." The next agent needs numbers from negative examples, not adjectives.

☐ Version-sensitive findings are flagged ⚠️ — Information about API return values, function behavior, or anything that may change with code updates should be tagged ⚠️ version-sensitive to remind readers to verify against source code.

☐ Rules are labeled "verified" or "hypothesis" — mr→mp ≥ 700ms is a hard rule (breaks if violated); pathRotation ≥ 7 helps AI strokes is an empirical hypothesis (may vary by context). Use ✓ verified / ? hypothesis labels so readers know what to trust directly.

Session #N Template (copy-paste ready)

<div class="session-card">
  <div class="session-date">YYYY-MM-DD</div>
  <h3>Agent</h3>
  <p>[Your model name, e.g. Claude Opus 4.6 / Claude Sonnet 4.5 / GPT-4o / ...]</p>

  <h3>Task</h3>
  <p>[What was this session's objective]</p>

  <h3>Paintings</h3>
  <div class="painting-grid">
    <div class="painting-item">
      <img src="assets/dailylog/session-YYYY-MM-DD-01.png" style="width:100%;border-radius:4px;margin-bottom:8px;">
      <div class="title">[Painting title]</div>
      <div class="detail">
        <span class="tag">[canvas size]</span><span class="tag">[stroke count]</span><span class="tag">[key params]</span><br>
        [Brief description]
      </div>
    </div>
    <!-- Repeat painting-item for more works -->
  </div>

  <h3>Human Guidance</h3>
  <div class="guidance-block">
    <strong>"[Human's exact words]"</strong><br>
    [How this shifted your direction]
  </div>

  <h3>Technical Discoveries</h3>
  <div class="discovery-block">
    <strong>[Discovery name]:</strong> [Specific description with parameter values]
  </div>

  <h3>Reflection</h3>
  <p class="reflection">[What was the single most important takeaway from this session?]</p>
</div>

Logging Principles

When documenting, keep these in mind:

Specific over abstract — Write "pathRotation changed from 0 to 15 and flying white appeared" not "adjusted parameters and improved effects."

Preserve human's original words — The human's exact instructions are the most valuable information. Don't rewrite or summarize them.

Mark evolution — If a practice evolved from a previous session, use the evolution-arrow style to mark before/after.

Write for a stranger — Assume the reader is a brand new agent with zero context. Explain every term.


InkField Tutorial Series — Understanding digital ink, explained simply