Jacob Paris

#17: April 2023 Roundup


This email went out to jacobparis.com's Remix subscribers at the end of April 2023. Those archives and subscribers have been merged into Moulton, so if you were a Moulton subscriber you will not have received this.

Hey folks!

If you're seeing this, it means that at some point in the past few months you read one of my articles and signed up to hear more.

Then, about a week ago, you would have gotten a confirmation email for the mailing list. Why did that take so long? Well, I hadn't actually set up a mailing service yet.

There are SO MANY of them, and I couldn't find one that I liked (tbh I still haven't) but I also didn't want to put it off forever, so I started with a Google Sheet instead.

Here's that story.

πŸ”₯ POST to a Google Sheet from a backend

Did you know integration engineer is an actual job? There are developers who specialize in working with third-party APIs and build adapters to make them work with other third-party APIs. It's a whole thing, and I'm... not here for it.

Integrations are hard, and each one is a potential failure point. When your system relies on third party services, their uptime becomes your uptime.

When I worked at Gitpod, part of my job was collecting emails for whitepapers, webinars, and other gated content. Those sign-ups needed to go into their CRM, Slack, and their email service. But I didn't build integrations for each of those, and it's a good thing too, because since then, they've changed their email service.

Instead, each signup went into a Google Sheet, and then I used Zapier to connect that sheet to the other services.

ConvertKit, which I'm using now, doesn't offer a way to just POST subscribers to my list. They want me to use their libraries and client-side forms. But I don't want to slow down my page with their code and harm my viewers with their cookies.

So I'm using a Google Sheet again, and I'm using a Zapier webhook to send the data to ConvertKit.

But while this method lets me stick to building a single integration instead of potentially many, Google isn't much friendlier to integrate with.

All of the docs for integrating with Google Sheets assume you're going to use their client-side libraries, and everything server-side assumes you're running on Google Cloud with direct access.

I don't want to do either of those. THEY HAVE A REST API! JUST LET ME USE THAT!

It took a long time to piece together everything I needed, but I got it working.

With no client-side code, no Google Cloud, and no Google libraries, you can fetch an access token and use it to add new rows to a Google Sheet from a Node.js backend.

This works SO NICELY with Remix actions
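If it helps to picture the moving parts, here's a rough sketch of that flow. This is not the guide's exact code: the two Google endpoints are real, but the credential names, sheet ID, and range are placeholders.

```typescript
// Sketch of the flow, not the guide's exact code. The OAuth and Sheets REST
// endpoints are real Google URLs; credentials and sheet ID are placeholders.

// Build the REST URL for appending rows to a sheet (pure, easy to test)
function buildAppendUrl(spreadsheetId: string, range: string): string {
  return `https://sheets.googleapis.com/v4/spreadsheets/${spreadsheetId}/values/${range}:append?valueInputOption=RAW`;
}

// Exchange a long-lived refresh token for a short-lived access token
async function getAccessToken(env: {
  clientId: string;
  clientSecret: string;
  refreshToken: string;
}): Promise<string> {
  const response = await fetch("https://oauth2.googleapis.com/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      client_id: env.clientId,
      client_secret: env.clientSecret,
      refresh_token: env.refreshToken,
      grant_type: "refresh_token",
    }),
  });
  const json = (await response.json()) as { access_token: string };
  return json.access_token;
}

// Append one row of cells to the bottom of the sheet
async function appendRow(accessToken: string, spreadsheetId: string, cells: string[]) {
  return fetch(buildAppendUrl(spreadsheetId, "Sheet1"), {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ values: [cells] }),
  });
}
```

In a Remix action you'd read the email out of the formData and pass it to appendRow.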

If you're thinking of setting up a newsletter down the line, use this guide to start collecting emails now, and then when you're ready, you won't have to start from zero.

Check out the full guide here

​jacobparis.com/guides/submit-form-google-sheet​

πŸ”₯ Remix feature folders

There are two main ways people organize their code.

One approach is to group files by type. All the components go in one folder, all the pages in another, all the styles in another, etc. Models, views, and controllers each get their own folder.

Beginners often pick this approach because it looks clean, in the same way that a workspace where every tool is in a drawer looks clean. Then, when you need to use a tool, you either have to go to the drawer every time to take it out and put it back, or leave it out on the table.

The other approach is to group files by feature. All the files for a single feature go in one folder. This is like having a toolbox where you keep all the tools you need for a single project together. You might end up with more than one of the same tool, but it's always right where you need it, and if you want to customize the tool for that project, it's not going to affect anything else.

I strongly favor the feature approach, especially for how it enables colocation:

  • I write code inline exactly where I need it, without having to think about how this can be re-used.
  • Once it works and covers all the use-cases it needs to, then I extract the logic into its own component or function, but it stays in the same file
  • When it needs to be used in more than one file, I extract it into its own file, but it stays in the same folder, or in a common parent of the places it's used

Most of the time I don't get to the end of this process. We often overestimate how much code will actually be reused and waste time trying to make a one-off component reusable.

Worse still is when we try to use one component for two different use-cases, and end up with a component that's too complex to understand and where small changes have unexpected side-effects.

To take this idea to the logical extreme, I built a new route adapter for Remix.

It allows you to specify .route.tsx files anywhere in your /app directory, and they will be recognized as routes.

Rather than putting everything in /routes, you can use a more domain-driven feature-folders architecture:

  • keep multiple routes in a folder with their assets colocated nearby
  • build an auth module that you can reuse across projects
  • drop an unrelated side project into your site that registers its own blog post
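As a concrete picture, a feature-folder app tree might look something like this (the folder and file names here are invented, not from the gist):

```
app/
β”œβ”€β”€ auth/
β”‚   β”œβ”€β”€ login.route.tsx        ← picked up as a route
β”‚   β”œβ”€β”€ logout.route.tsx
β”‚   └── session.server.ts      ← colocated, not a route
β”œβ”€β”€ blog/
β”‚   β”œβ”€β”€ post.route.tsx
β”‚   └── post.css
└── root.tsx
```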

I haven't published this as a module yet, and there's no announcement blog post, but you can get it first at this Gist

​gist.github.com/jacobparis/69aa352e38317d3b986a089565f5c1b6​

πŸ”₯ I finally set up a staging environment

When trying to deploy the feature folders, I got it wrong a few times and broke my site. It was only ever broken for a minute, because when the new version failed the healthcheck, my hosting provider (Fly.io) would automatically roll back to the previous version.

But I have enough traffic now that when I watch the server logs during a rollback, I can see incoming requests failing, and I know real people are seeing errors.

This is great news, technically: Enough people are visiting my site that they notice when it's down.

So it was time to set up a staging environment. Fly actually makes that pretty easy. I duplicated the fly.toml file that configures my production environment and made a fly.staging.toml.

Then I just had to run fly launch --config fly.staging.toml to set up and deploy the staging environment.
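For a rough idea of what that file contains — the app name, region, and port below are placeholders, and the real file is just a copy of the production fly.toml with the name changed:

```toml
# Hypothetical fly.staging.toml: same shape as the production config,
# with a different app name so it deploys as a separate Fly app
app = "my-site-staging"
primary_region = "ewr"

[http_service]
  internal_port = 3000
  force_https = true
```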

That was easy enough that I should have done it ages ago.

πŸ”₯ Save money by autoscaling your Fly apps to zero when inactive

Now that I had this staging environment, my hosting bill was starting to concern me.

I have A LOT of little side projects and demo examples that I host on Fly, and most of them get hardly any traffic. But they're all running 24/7, and that adds up.

Fly is a serverless platform, even though it deals in containers rather than functions like most serverless platforms do, so I realized I could use its autoscaling to scale my apps down to zero when they're not being used.

The core idea is that you can set the minimum number of machines your app runs on to 0, and then write a script that kills your server when it's not being used.

With the minimum set to zero, Fly won't automatically restart the server when it exits, but it still spins up a new machine the moment a request comes in.
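The kill-when-idle script can be sketched like this. The names and the 5-minute threshold are illustrative choices of mine, not from the guide:

```typescript
// Sketch of the kill-when-idle idea: track when the last request came in,
// and exit cleanly once the app has been quiet for a while. With the
// machine minimum at zero, Fly won't restart it until traffic returns.

const IDLE_TIMEOUT_MS = 5 * 60 * 1000; // 5 minutes: an arbitrary choice
let lastRequestAt = Date.now();

// Call this from middleware on every incoming request
function markRequest() {
  lastRequestAt = Date.now();
}

// Pure helper so the idle check is easy to test
function isIdle(lastRequest: number, now: number, timeoutMs: number): boolean {
  return now - lastRequest > timeoutMs;
}

const idleCheck = setInterval(() => {
  if (isIdle(lastRequestAt, Date.now(), IDLE_TIMEOUT_MS)) {
    process.exit(0); // exit cleanly so Fly sees a shutdown, not a crash
  }
}, 30_000);

// don't let the timer itself keep the process alive (Node-only API)
(idleCheck as unknown as { unref?: () => void }).unref?.();
```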

Surprisingly, the cold start is only about 5 seconds, which is perfectly acceptable for these use-cases, and still faster than even Lambda was a few years back.

Full instructions are in this guide here

​jacobparis.com/guides/fly-autoscale-to-zero​

πŸ”₯ Where to host your Remix app in 2023

Speaking of Fly, I also took a deep dive into the hosting landscape for Remix apps.

There has been A TON of innovation in the cloud hosting space recently. Most of the big hosts have taken steps toward adopting web standards, and the things we can do with serverless functions are rapidly approaching feature parity with traditional servers.

Streaming support has been a big one, with both Vercel and Cloudflare implementing it.

As Remix releases new features like Suspense support and deferred loaders, people have run into issues where everything works locally, only to discover their host doesn't actually support those features.

Despite Vercel's new support for streams, for example, their connections still don't stay open long enough for practical Server-Sent Events.

Fastly on the other hand has support for pretty much everything, as long as you're ok with compiling to a WASM environment.

I've compared each of these in detail with comparison tables for both serverless function and long-lived server hosts in this guide. Give it a read!

​jacobparis.com/guides/where-to-host-remix​

πŸ”₯ Find and fix performance bottlenecks in your Remix app with Server Timing

In this journey of self-improvement I decided to give my site a performance audit, and it was not good.

Over 90% of my homepage load time was spent compiling and syntax highlighting some code snippets I use to demo my VS Code themes, which aren't even shown above the fold!

I've removed that entirely and now my site is Blazing Fastβ„’! But the interesting bit is how I found that out.

The web platform has a header for sharing server timing information with the browser dev tools, called Server-Timing. I first heard about this from one of Kent C. Dodds' tweets back in November, and it was finally time to try it out.

In your browser dev tools network tab, you can see a breakdown of how long each step of the request took.

I started profiling blocks of code and adding their timings to my loader headers, but it was a bit tedious for a lot of reasons. Remix distinguishes between document requests and data requests, so you can set specific headers for either. That's great if you want dynamic cache control, but it makes getting these server timings consistent a bit of a pain.

What's more: Remix also runs loaders in parallel, so you need to wait until they're all done and merge the timings together before you can send them to the browser.

Eventually I came up with a pattern that feels nice to use, and it looks like this:

  • time functions that wrap code to clock it, you can use as many of these as you like
  • a getServerTimingHeader function that prints out all the timings
  • a one-liner export to add to each route to merge the headers

With some fancy partial application under the hood, you don't need to track the results of the time functions yourself; they're automatically linked and ready to print to your response headers.
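The shape of that pattern might be sketched like this. The time and getServerTimingHeader names come from the bullets above, but the implementation is my guess, not the article's code:

```typescript
// A guess at the shape of the pattern, not the article's implementation.
// Each request gets its own collector: `time` wraps a promise and records
// its duration, and getServerTimingHeader prints every recorded timing in
// the Server-Timing header format ("name;dur=milliseconds").

type Timings = Map<string, number>;

function createTimings() {
  const timings: Timings = new Map();

  // Wrap any async block of code to clock it
  async function time<T>(name: string, fn: () => Promise<T>): Promise<T> {
    const start = performance.now();
    try {
      return await fn();
    } finally {
      timings.set(name, performance.now() - start);
    }
  }

  // Print every timing as one Server-Timing header value
  function getServerTimingHeader(): string {
    return [...timings.entries()]
      .map(([name, duration]) => `${name};dur=${duration.toFixed(1)}`)
      .join(", ");
  }

  return { time, getServerTimingHeader, timings };
}
```

A loader would wrap its slow calls in time("db", ...) and attach getServerTimingHeader() to the response.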

Full details and source code are in this guide

​jacobparis.com/guides/remix-server-timing​

πŸ”₯ Modern CRUD has no save buttons

Modern CRUD has no save buttons
No loading spinners
All changes are persisted automatically and optimistically
Users expect this now

I tweeted this out recently

Users HATE losing their work. That's nothing new.

It used to be that if users accidentally closed a program or the power went out, and it caused them to lose their work, they would blame themselves. It was their fault for not saving often enough, or not making backups.

But their standards have gone up dramatically, and if they lose their work now, they blame the software.

It's our job as developers to make sure that doesn't happen. As you develop your application, put in safeguards to make sure that users can't lose their work.

Some apps, like Notion, Figma, and Google Docs, make all changes live immediately, but they do it right:

  • they keep a full history of changes so you can revert at any time
  • you can see a "last updated at" signal so you know it's saving
  • they notify you if you lose internet access and your changes aren't being persisted

Other apps can't publish changes automatically, but they still manage to persist your work so you don't lose any of it.

  • TurboTax doesn't optimistically file your tax return, but if you leave and come back, you don't have to start over
  • You can leave a Slack message half-written and come back to it later, even from a different device
  • In Superhuman, you can draft an email and send it completely offline, and it will send automatically when you're back online
  • VS Code has an autosave feature that saves your work every time you leave the window

Linear does this really well

  • there's an explicit save button for creating new issues, but if you leave the new issue page and come back, the same draft data is ready to go again
  • when you edit an issue, changes are made live immediately and you can see your edit in the issue history

Engineering a good user experience is HARD. This is something that we as developers should be thinking about more.

πŸ”₯ Persist data client-side in React with useLocalStorageState

In the spirit of persistent UI, one React hook I really like is useLocalStorageState

It works just like useState, but it persists the data to local storage so you can refresh the page and it will still be there.
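For a sense of the core idea — this is not the hook's actual source, just a sketch of the persistence logic without React, using a Storage-like parameter (window.localStorage in the browser) so it can run anywhere:

```typescript
// Sketch of the idea behind useLocalStorageState, minus React: hydrate the
// initial value from storage, and write every update back as JSON. The
// real hook would also call React's state setter in `set` to re-render.

type StorageLike = {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
};

function createPersistedState<T>(key: string, defaultValue: T, storage: StorageLike) {
  // Hydrate from storage if a value was saved on a previous visit
  const stored = storage.getItem(key);
  let value: T = stored !== null ? (JSON.parse(stored) as T) : defaultValue;

  return {
    get: () => value,
    set: (next: T) => {
      value = next;
      storage.setItem(key, JSON.stringify(next)); // persist every update
    },
  };
}
```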

More details in this review

​jacobparis.com/guides/use-local-storage-state​

πŸ”₯ The URL is the ultimate global state management tool

The URL is the global state manager everyone has been looking for

​This discussion started after a bunch of discourse involving complicated global state solutions.

Most application "state" is actually just a home-made cache for server data, and the URL is often the best place to store the rest.

In fact, the URL outperforms most state management solutions simply because it's accessible on the server, so you're able to render the correct page with the correct data fully hydrated on the first request instead of loading it later.

The URL is the best place for

  • search filters and queries
  • pagination parameters
  • sort order
  • selected items
  • open tabs
  • modal state
  • and more

This way you can use the browser back button to close modals, or create shareable state that can be sent to other users.
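All of this falls out of the standard URL API. A minimal sketch (the param names here are made up; in Remix you'd use useSearchParams in a component or request.url in a loader):

```typescript
// Minimal sketch using the standard URL API. Setting a param opens a modal
// or tab; deleting it (what the back button effectively does) closes it.

function setSearchParam(href: string, key: string, value: string | null): string {
  const url = new URL(href);
  if (value === null) {
    url.searchParams.delete(key); // e.g. closing the modal
  } else {
    url.searchParams.set(key, value); // e.g. opening a modal or tab
  }
  return url.toString();
}
```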

For more ideas and examples, check out this guide

​jacobparis.com/guides/url-as-state-management​

πŸ”₯ The state of type-safe data fetching

I've been hearing a lot about tRPC lately and decided to dive in and see what it's all about

​This tweet sums it up​

An RPC (Remote Procedure Call) is a way to call a function on a remote server. It's like a REST API, but instead of calling a URL, you call a function. Since calling URLs triggers functions, RPC is basically fetch with the URL built in.
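A toy illustration of that idea — emphatically NOT tRPC's actual API, just the concept: the client is typed by the server's router, so calling a procedure really is fetch with the URL built in. The transport is a parameter here so a fake one can stand in for HTTP:

```typescript
// Toy typed-RPC client, not tRPC's real API. Every property access on the
// proxy becomes a call through the transport, and the Client type maps the
// router's function signatures onto the client.

type Router = Record<string, (input: any) => any>;
type Transport = (procedure: string, input: unknown) => Promise<unknown>;

type Client<R extends Router> = {
  [K in keyof R]: (input: Parameters<R[K]>[0]) => Promise<ReturnType<R[K]>>;
};

function createClient<R extends Router>(transport: Transport): Client<R> {
  return new Proxy({} as Client<R>, {
    get: (_target, procedure) => (input: unknown) =>
      transport(String(procedure), input),
  });
}
```

Over HTTP, the transport would be a fetch to something like /rpc/<procedure> that returns the JSON result.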

tRPC is a library that lets you define type-safe RPC functions, and I actually really like it

Each function gets registered on the server in a router, and the client also has access to that router. Types for the input arguments and return types are maintained all the way through with full inference, which creates developer experiences like being able to migrate your DB schema and immediately get type errors in your client code so you know what to update.

tRPC is traditionally used with TanStack Query (formerly React Query) and Next.js, but it's actually framework agnostic.

Next and Remix alone don't have the whole type-safety story figured out yet, but they're still really good. In both cases you can get full type inference for all queries, but you'll need to do runtime checks for any data sent from the client to the server.

Deep dive below!

​jacobparis.com/guides/type-safe-data-fetching​

πŸ”₯ 3 runtime validation libraries for Typescript that all look the same to me

I used to do all my runtime type-checks and validations manually, but I really like Zod.

It takes an input and either returns the input or throws an error. It's really easy to use, and it's really easy to read.
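Here's what that parse-or-throw pattern looks like, hand-rolled for illustration (the Subscriber shape is invented). With Zod, the whole thing collapses to roughly z.object({ email: z.string(), name: z.string() }).parse(input):

```typescript
// Hand-rolled parse-or-throw, illustrating what a schema library does:
// take an unknown input, and either return it with a trustworthy type or
// throw an error describing what's wrong.

type Subscriber = { email: string; name: string };

function parseSubscriber(input: unknown): Subscriber {
  if (typeof input !== "object" || input === null) {
    throw new Error("Expected an object");
  }
  const record = input as Record<string, unknown>;
  if (typeof record.email !== "string") throw new Error("Expected email to be a string");
  if (typeof record.name !== "string") throw new Error("Expected name to be a string");
  // Return the input, now safely typed
  return { email: record.email, name: record.name };
}
```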

Other libraries like Joi and yup have basically the same API, but their type inference isn't as good.

At this point I don't know why I'd choose anything other than Zod.

​jacobparis.com/blog/typescript-runtime-validation​

πŸ”₯ Tweet: Emojis can be made of multiple emojis

One fun tweet recently pointed out that you can actually iterate through an emoji in JavaScript.

The family emoji πŸ‘¨β€πŸ‘©β€πŸ‘§β€πŸ‘¦ is actually made up of four emojis separated by zero-width joiners, and a for loop can pick out each of them:

  • πŸ‘¨
  • πŸ‘©
  • πŸ‘§
  • πŸ‘¦

I have a YouTube channel?

A few years ago I made a YouTube channel and posted one tutorial series with 10 videos, then forgot about it entirely.

Now I have 120 subscribers? I don't know how that happened. Is there any way to merge that channel with my real account? Maybe I should just move there? Who knows.

I don't know what I'm going to do with the channel yet.

​youtube.com/@jacobparis7715​

What's happening next?

I've put out a lot of content this month, and I'm going to keep going with that

But if you go to my website, it's not so easy to find all of it. I'm working on a bit of a redesign to make it easier to navigate, structured kinda like a free course.

I also want to experiment with some new formats and try some video content. I'm not sure what that's going to look like yet, but I'm excited to try it out.

Some topics on my mind

  • Building a linear-like app with Remix
  • CRDTs, or Conflict-free Replicated Data Types. That's how multiplayer apps like Google Docs, Trello, and Figma let multiple users work on the same thing at the same time without constantly overwriting or undoing each other's changes. I want to learn more about that and explore some example implementations in Remix.

Hey there! I'm a developer, designer, and digital nomad building cool things with Remix, and I'm also writing Moulton, the Remix Community Newsletter

About once per month, I send an email with:

  • New guides and tutorials
  • Upcoming talks, meetups, and events
  • Cool new libraries and packages
  • What's new in the latest versions of Remix

Stay up to date with everything in the Remix community by entering your email below.

Unsubscribe at any time.