Week 102: Looking for Work and still busy with Threlte

3 Feb, 2023

The past few weeks have been super busy. Lesson learned: write weeknotes during the week so I don’t miss one like I did last week. They do take quite some time to write though, especially preparing images and GIFs.

Most of the busyness has been talking to people about potential projects. I’m still exploring what’s next, and have set up a dedicated Looking for Work page. Weekly reminder: please keep booking 30 min with me to talk about anything, and definitely also about potential collaborations. I so enjoy these chats and want to keep them going.

Mario in Threlte land

I’m still exploring the idea of a snazzy portfolio and combining it with my learnings around THREE.js and most recently Svelte and Threlte. I imagine it would be cool if each of my portfolio projects was a little “island” in a larger world. I took that idea further by building on last week’s work.

I decided to swap out the island for a different model, add a bit of cozy lighting, and animate the clouds to move in and out of the scene.

Screen recording of Mario in a Threlte scene on a forest island

The LEGO Mario model is from the LDraw unofficial parts library, and the little tree patch is Tree Assets by Ben Desai (https://poly.pizza/m/eLqmfpqu_Ig), CC-BY via Poly Pizza.

The workflow of gluing everything together is still a bit cumbersome, but it’s way less work than designing the island and the Mario model myself (a rough sketch of the last step follows the list):

  1. Generate Mario.mpd model from LDraw
  2. Import Mario.mpd into Blender using LDraw plugin
  3. Convert LEGO materials into web-renderable Principled BSDF materials
  4. Export Mario model to .glb
  5. Run Mario.glb through gltfjsx for compression
  6. Load Mario.glb into Threlte, and do lighting and animation in JavaScript
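
Threlte wraps most of this in Svelte components, but the underlying THREE.js for step 6 looks roughly like this - a minimal sketch, with made-up paths, colors and light intensities:

```js
import * as THREE from 'three'
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js'

const scene = new THREE.Scene()

// A warm "cozy" key light plus a dim ambient fill (values are guesses)
const sun = new THREE.DirectionalLight(0xffd9a0, 1.2)
sun.position.set(5, 8, 3)
sun.castShadow = true
scene.add(sun, new THREE.AmbientLight(0x8899bb, 0.4))

// Load the Mario model exported from Blender and let it cast shadows
new GLTFLoader().load('/models/Mario.glb', (gltf) => {
  gltf.scene.traverse((child) => {
    if (child.isMesh) child.castShadow = true
  })
  scene.add(gltf.scene)
})
```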

Bouncing Scratch Blocks

Similar in concept to the Mario island, I was thinking of displaying some of my work at the MIT Media Lab, where I designed and developed the Scratch LEGO BOOST extension together with the Scratch Team. Here, I’ve made a 3D version of the blocks and bounced them around using the Rapier physics engine (a rough sketch of the physics setup follows the workflow below).

Extruded 3D Scratch blocks bouncing around

Slightly different workflow here:

  1. Generate scratchblocks.svg representation of blocks using http://scratchblocks.github.io
  2. Import SVG into Sketch, clean up paths, outline text, etc.
  3. Import SVG into Blender
  4. Convert SVG to meshes and extrude
  5. Export scratchblocks.glb
  6. Run scratchblocks.glb through gltfjsx for compression
  7. Set up physics in Threlte and JavaScript
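
Threlte has Rapier bindings, but stripped of the component layer, the physics setup behind step 7 boils down to something like this - a minimal sketch using @dimforge/rapier3d-compat directly, with placeholder sizes, positions and gravity:

```js
import RAPIER from '@dimforge/rapier3d-compat'

await RAPIER.init()
const world = new RAPIER.World({ x: 0, y: -9.81, z: 0 })

// A static ground collider for the blocks to land on
world.createCollider(RAPIER.ColliderDesc.cuboid(10, 0.1, 10))

// One dynamic rigid body per extruded Scratch block
const body = world.createRigidBody(
  RAPIER.RigidBodyDesc.dynamic().setTranslation(0, 5, 0)
)
world.createCollider(RAPIER.ColliderDesc.cuboid(0.6, 0.25, 0.1), body)

// Each frame: advance the simulation and copy the pose onto the THREE.js mesh
function step(blockMesh) {
  world.step()
  const t = body.translation()
  const r = body.rotation()
  blockMesh.position.set(t.x, t.y, t.z)
  blockMesh.quaternion.set(r.x, r.y, r.z, r.w)
}
```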

DALL-E to Threlte

The last idea didn’t quite get to where I wanted, but I still thought I’d include it for fun. I wanted to play with generating images using DALL-E, turning each bitmap pixel into a THREE.js geometry, and animating all of them.

In the example below I animate each pixel’s position on the z-axis using a sine function. The performance turned out to be pretty poor, which I suspect might be because of Threlte or how I use instanced meshes. I tried reducing the image resolution to 64x64 pixels, which results in 4,096 geometries, but I still end up with less than 10 fps. I might give it another go or ask for support in the Threlte community. A rough sketch of the approach follows the workflow list below.

DALL-E generated image using the prompt cat in a hat, pixelated into THREE.js geometries

Workflow:

  1. Use OpenAI’s DALL-E API to retrieve bitmap
  2. Create a <canvas> to put the bitmap on
  3. Read each pixel’s RGB value
  4. Instantiate a THREE.js Box for each pixel
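
For reference, here’s a rough sketch of that pipeline in plain THREE.js, packing all the boxes into a single InstancedMesh so they stay in one draw call. `image` is assumed to be the decoded DALL-E bitmap and `scene` an existing THREE.js scene:

```js
import * as THREE from 'three'

const SIZE = 64 // 64x64 = 4,096 pixels

// Steps 2-3: draw the bitmap on a canvas and read back every pixel's RGB value
const canvas = document.createElement('canvas')
canvas.width = canvas.height = SIZE
const ctx = canvas.getContext('2d')
ctx.drawImage(image, 0, 0, SIZE, SIZE)
const pixels = ctx.getImageData(0, 0, SIZE, SIZE).data

// Step 4: one box per pixel, all stored in a single InstancedMesh
const boxes = new THREE.InstancedMesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial(),
  SIZE * SIZE
)
const dummy = new THREE.Object3D()
const color = new THREE.Color()

for (let i = 0; i < SIZE * SIZE; i++) {
  const x = i % SIZE
  const y = Math.floor(i / SIZE)
  dummy.position.set(x - SIZE / 2, SIZE / 2 - y, 0)
  dummy.updateMatrix()
  boxes.setMatrixAt(i, dummy.matrix)
  color.setRGB(pixels[i * 4] / 255, pixels[i * 4 + 1] / 255, pixels[i * 4 + 2] / 255)
  boxes.setColorAt(i, color)
}
scene.add(boxes)

// Per frame: wave each box on the z-axis with a sine function
function animate(timeMs) {
  for (let i = 0; i < SIZE * SIZE; i++) {
    const x = i % SIZE
    const y = Math.floor(i / SIZE)
    dummy.position.set(x - SIZE / 2, SIZE / 2 - y, Math.sin(timeMs / 500 + i * 0.05))
    dummy.updateMatrix()
    boxes.setMatrixAt(i, dummy.matrix)
  }
  boxes.instanceMatrix.needsUpdate = true
}
```

If the bottleneck really is 4,096 separate meshes, keeping them in one instanced draw call like this (or moving the sine wave into a vertex shader) should help.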

Any other business

There’s quite a lot of admin work in being self-employed. Although Calendly helps a lot with scheduling chats with people, it’s still a lot of work to reach out to people, and it all takes time away from making.

A few things I’m looking to accomplish next week:

  • Set up an intro.co account to try and use their platform for consultancy
  • Set up a newsletter for these posts for people who don’t want to visit my website to follow along. Looking into using Zapier+Mailchimp for this. If you have any advice please let me know.

I’m also thinking about writing a book called “Tools & Interactions”. Even just putting the idea out there is scary, but I started writing a Table of Contents this week, and I think it might actually be a ton of fun. If anybody knows anything about the history of tools, and especially computers/computing as a tool, I would really appreciate any input.

Week 100: Anniversary and Threlte

21 Jan, 2023

This is not the 100th post, but it has been 100 weeks since I started this blog. Other than that it’s been a pretty normal week - actually a slightly slower and less focused one than last week.

Since I enjoy them so much, here’s the weekly reminder that you can book me for 30 min to talk about anything. Need a hand with a project? Are you a student looking for advice? Or do you just want to catch up or geek out about something? Don’t be shy.

Also a reminder: I’m available for hire on part-time/short-term projects. I’ll write more about what I’m looking for soon, but this blog, my post from a couple of weeks ago, and my main site hopefully give an indication of my experience and interests. If not, hey, that’s a great opportunity to use the link above and have a chat.

Threlte

I have been developing for the web, either for work or just for fun, for as long as I have had internet access - roughly 27 years now. Despite the chaos of diverging implementations of web standards across browsers, and the churn of programming languages and frameworks, I keep coming back because of the immediacy of the world wide web. After all these years, I still love that I can send a link to anyone, and they can experience and interact with something I made without needing to install anything.

I think there is huge potential for interactive 3D experiences on the web. I have been playing with THREE.js for a few years now, and last week I focused on learning Svelte. Threlte is basically THREE.js for Svelte.

Threlte is still early in development, but it’s already proving to be a great tool for developing better THREE.js experiences. Besides learning how to build with 3D in a state-driven, component-based way, Threlte provides easy access to some helpful tools I didn’t know about, like Theatre.js.

Theatre.js provides an animation library with a great motion design toolset and UI. Here’s a quick sketch I did of LEGO Mario using Threlte and Theatre.js:

Screen recording of a quick jumping Mario animation made using Threlte/Theatre

LEGO Mario model is from the LDraw unofficial parts library and the little tree patch is Trees by Poly by Google CC-BY via Poly Pizza.
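
For reference, the Theatre.js side of an animation like the one above is pleasantly small. A minimal sketch using only the core API (no Threlte bindings), where `marioModel` stands in for the loaded GLB and the single `y` property is just an example:

```js
import { getProject, types } from '@theatre/core'
import studio from '@theatre/studio'

studio.initialize() // opens the motion-design UI in the browser

// One sheet with a single animatable object: Mario's vertical position
const sheet = getProject('Mario Jump').sheet('Scene')
const mario = sheet.object('Mario', {
  y: types.number(0, { range: [0, 3] }),
})

// Whatever gets keyframed in the UI drives the THREE.js object
mario.onValuesChange((values) => {
  marioModel.position.y = values.y
})

sheet.sequence.play({ iterationCount: Infinity })
```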

Next week I’m considering doing some work blending Threlte with DALL-E, since I think it could be fun to render bitmap pixels as 3D cubes, but I’m not committed to it.

Reading

I think I forgot to mention that I finished the Earthsea quartet by Ursula K. Le Guin. I love Le Guin and all the themes she treats in these books: friendships, gender roles, and reflections on living life and a life lived. But the overarching theme of names and naming stood out to me. She explores the idea that people are given three names: a birth name by our parents, a name that reflects how others observe us, and then our true name. Our true name is precious and can only be given to us by someone who has the power of naming, and is only to be revealed to and used by people we trust. There is power in naming; names can be given, and taken away. It feels like a crime to even attempt to unfold this in a simple blog post, so please just go read it.

I’ve now started reading Design Fiction by the Near Future Laboratory.

Week 99: Adventure Writer, GPT and Svelte

13 Jan, 2023

This second week of self-employment has been a really fun mix of interesting conversations and building prototypes. People have already been using the opportunity to book 30 min with me on Calendly or otherwise contact me. Thank you to everyone who has, and please keep doing so. I have been thoroughly enjoying conversations about life, technology, design, careers and potential collaborations. By request, I have also added some time slots that should work for folks over on the US west coast. If you still don’t find times that work for you, let me know.

Adventure Writer

As this year’s first side project, I decided to make an Adventure Writer that lets people write AI-assisted adventures. If you want to give it a try, visit: http://knandersen.github.io/adventure-writer/

Demo recording of Adventure Writer at 2x speed

In this post I dive into why I chose this project, the thoughts behind the design, the technical implementation and details, and how I could continue the work.

Background

For the past few years, as also mentioned in last week’s post, I have been curious about the synergy of human and machine intelligence, especially in creative work. With the emergence of readily available large-language models (LLMs), it’s never been easier to experiment with new experiences and interactions.

The goals I had for this project:

  • Become more familiar with LLMs, prototyping with OpenAI models
  • Improve my front-end development skills by learning Svelte
  • Play with the interaction design

Concept

Adventure Writer is fairly simple:

  1. A few starting lines are generated by the AI
  2. You continue writing
  3. You can request a couple more sentences from the AI if desired
  4. Repeat steps 2-3

It’s a speculation on how machine intelligence may be able to help you stay in flow so you spend less time creatively blocked, while maintaining creative control.

Interaction Design

I initially developed Adventure Writer for the desktop browser, since that allows me to share the prototype easily anywhere. The core interaction with the AI should be effortless and minimally disruptive to the writing flow. To me that meant using a keyboard key, but it shouldn’t be too complex a combination, or a character used in the writing itself. Since it’s usually used to switch between interface elements rather than to enter text, I decided to use the Tab key.
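
The key handling itself is tiny - roughly something like this, where `requestCompletion` is a hypothetical stand-in for the call to the AI:

```js
// Intercept Tab so it triggers generation instead of moving focus
window.addEventListener('keydown', (event) => {
  if (event.key === 'Tab') {
    event.preventDefault()
    requestCompletion() // hypothetical: asks the server for the next few sentences
  }
})
```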

But once I sent the first prototype to a friend, his first reaction was “oh cool, but it doesn’t work on my phone, so I guess I’ll try it at home”. It was a little embarrassing. After a decade of mobile-first development, I should know better. I’m continuously impressed by how many typing activities are increasingly possible on mobile, like coding using Repl.it Mobile.

So I switched strategy to enable a mobile experience, but this introduced a new challenge - there’s no Tab key on the phone’s virtual keyboard, nor any other unused key that could take its place. My solution is to put a floating button right above the keyboard that triggers the AI generation (a rough sketch of the idea follows the screenshot).

Screenshot of mobile interface controls
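
There’s no reliable native signal for the keyboard itself, but the visual viewport shrinks when it opens, which is one (imperfect) way to keep a button pinned above it. A rough sketch of that idea, with a hypothetical `#generate` button - not the exact code in the prototype:

```js
const button = document.querySelector('#generate')

// Pin the button just above the on-screen keyboard by tracking the visual viewport,
// which shrinks when the keyboard opens while the layout viewport stays the same.
function positionButton() {
  const vv = window.visualViewport
  if (!vv) return
  button.style.position = 'absolute'
  button.style.top = `${vv.offsetTop + vv.height - button.offsetHeight - 8}px`
}

window.visualViewport?.addEventListener('resize', positionButton)
window.visualViewport?.addEventListener('scroll', positionButton)
positionButton()
```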

On desktop, typing interactions like deleting a few words or lines are easy, but unfortunately on mobile they’re not. My solution is to add a slider that lets you scale the amount of AI-generated text you’d like to keep.

Slider interaction

Although it’s a bit buggy due to the lack of a native way to detect the on-screen keyboard or align things to it, I’m happy with this highly “thumbable” mobile interaction design concept.

Technical implementation

To retrieve AI-generated text, I use OpenAI’s GPT-3 Text Completion models. I initially used the most capable, but also slower and more expensive, Davinci model; at the time of writing I’m using the faster and cheaper Curie, which seems more than capable for this concept. To use the OpenAI API, I wrote a Node.js server that I run on a free Render instance.

When starting the adventure, I send OpenAI the prompt Start a fictional fairytale story:. When asking for additional content, I send the prompt Continue this story:<TEXT>, with <TEXT> being what’s been typed so far.
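
A minimal sketch of what such a server could look like, assuming Express, Node 18’s built-in fetch, and GPT-3’s REST completions endpoint - the route name, model and token limit here are illustrative, not necessarily what the prototype uses:

```js
import express from 'express'

const app = express()
app.use(express.json())

app.post('/complete', async (req, res) => {
  // Start a new story, or continue whatever has been written so far
  const prompt = req.body.text
    ? `Continue this story:${req.body.text}`
    : 'Start a fictional fairytale story:'

  const response = await fetch('https://api.openai.com/v1/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: 'text-curie-001', prompt, max_tokens: 64 }),
  })
  const data = await response.json()
  res.json({ text: data.choices[0].text })
})

app.listen(process.env.PORT || 3000)
```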

For the website itself, everything is written in Svelte.

Ideas for improvement

This was mostly meant to be a quick and fun learning and prototyping exercise. But if I were to continue the work, here are a few ideas that would be worth exploring:

  • Enable cycling through multiple suggestions instead of just one
  • Fine-tune the AI model on adventures
  • Provide AI story analysis, i.e. is this a “good” adventure?

Reflections on process

It’s generally been fun getting my hands dirty with GPT and Svelte. On the other hand, it was very frustrating spending a disproportionate amount of time debugging weird CSS/HTML/JavaScript issues across the mobile and desktop implementation.

I would rather spend the time on creative exploration, so for future work I’ll consider either dropping mobile, sharing demos as videos rather than working prototypes, or building native apps. None of these seem optimal to me, so I’ll keep thinking about it.

Week 98: What is Next?

4 Jan, 2023

It’s a new year, and that also marks another end and beginning for me.

I decided to leave my role leading the Tools & Interactions team at Bang & Olufsen, which also means I am looking for what is next for me. If you have interesting ideas, book 30 min. with me on Calendly or contact me.

Reflections

For the past nine years I’ve invented new technologies, products, tools, teams and processes. The only constant has been that the missions and their deadlines were “impossible”, so I’ve had to invent new ways of thinking and doing to make them happen. I’m still curious to do that in an environment where people are excited to learn and build together.

I have spent some time reflecting on the products I have built over the years, as well as the sketching and prototyping environments I built to make them happen. To me, products like LEGO Super Mario and LEGO Education SPIKE Prime are tools for play and learning. And a lot of the prototypes I built to bring them into the world were built as tools for sketching faster and more creatively across design, engineering and business.

At Bang & Olufsen I drove new thinking about redefining audiovisual products, patented a number of technologies to make interactions more intuitive and expressive, and managed a team of young talented designers for the first time. I removed abstractions between design and engineering through closer collaboration and an emphasis on learning and shared responsibility.

Now I’m searching for a new mission and a group of people to do something exciting with. Searching for something relentful. Something that has the ambition to build products that are more like tools. Something that takes the best of human and machine intelligence and treats them as complementary. With the new wave of AI technologies, it feels like Marc Andreessen’s old claim that “software is eating the world” is happening again. I’m still curious, though, about the interfaces to that software. And what could the physical entry points to that software be?

Gathering

The past month, I have been focused on gathering. Listening, reading, talking to people.

I’ve dabbled in learning Svelte and connecting it with THREE.js through the Threlte library. I still build websites and have increasingly used web-based interfaces to rapidly sketch with hardware and software. Maybe I’m wrong on this, but Svelte feels easier and lighter than React. I’m considering respinning my portfolio using Svelte and THREE.js, since it could be a fun project for learning.

Like everyone else, I have also been dabbling in ChatGPT and playing with GPT-3. Although the hype will subside, I believe these AI technologies will change the field of technology products, how they are made, and even what kind of teams will be needed to make them.

Reading

Making

As a learning project for GPT-3, I have been toying with the idea of an Adventure Writer application that supports writing adventures in a way that combines human and machine intelligence. The concept is that you enter a loop where you alternate between writing by yourself and using GPT-3 to generate a few sentences at a time as inspiration. Through a few key interactions like scrolling to increase/decrease the amount of text generated, and clicking words to generate synonyms, I’m hoping to build an interface that stimulates the creative process.

Based on user feedback, I have also been making updates to Morphaweb and added a version number to the website, which will help when debugging with users. I’m curious about automating the versioning through GitHub Actions or something clever - I might ask GPT for ideas.

Week 87: Paul Stretch

23 Oct, 2022

Twelve years ago, while I was studying, I came across an article that described an ambient Justin Bieber song. It sounded like an oxymoron to me, but also intriguing. A few seconds in, I was completely hooked.

I remember taking note that the song was made using something called PaulStretch, but I guess I never investigated further back then. I’ve thought about PaulStretch several times since, and this week I decided to look into it. Apparently, it’s made by Paul Nasca, and he even put C++ and Python versions on GitHub.

I’m always fascinated by people like Paul. He reminds me a bit of Tom Erbe. They seem like people who are able to operate at the intersection of mathematics, code, and sound in an aesthetic way, and who, rather than building just for themselves, are able to provide beautiful sound tools to others.

One of my piano recordings before stretching:

After 8x stretching:

Next week

On Tuesday I’ll be doing a Shader Prototyping workshop hosted by Patricio González Vivo. Really excited to learn from Patricio. He’s the creator of The Book of Shaders, and I am perpetually in awe of the work he shares in his Twitter feed.