Rachel Morrison and I have been collaborating on the Cinescope project for five years, and I've refined Rachel's vision of a minimalist scouting app several times now. It has been a constant balancing act between maintaining simplicity and surfacing complex functionality to directors and cinematographers, a user base that demands the highest level of visual fidelity.
📷 The Screen is the UI
A perfect example of the simplicity versus complexity challenge is manual camera controls. We wanted to minimize the addition of UI elements for adjusting focus, brightness, temperature, and tint. After a few failed button-based solutions, it dawned on me that we could support all of these features directly on the surface of the device through gestures alone.
Based on additional user feedback, we added a UI element that appears while a manual adjustment is being made. It displays a numerical indicator and tick marks showing the current and previous values.
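The gesture-driven approach can be sketched roughly as follows: a pan gesture on the preview surface is translated into a manual focus adjustment. This is a minimal illustration, not Cinescope's actual code; `device` stands in for the active `AVCaptureDevice`, and the scaling factor is an arbitrary assumption.

```swift
import AVFoundation
import UIKit

// Illustrative sketch: map a vertical pan on the camera preview to the
// device's manual focus position. `device` and the 1/500 scale factor
// are assumptions for the example, not values from the app.
@objc func handleFocusPan(_ gesture: UIPanGestureRecognizer) {
    let translation = gesture.translation(in: gesture.view)
    let delta = Float(-translation.y / 500)  // drag up to focus farther away
    let position = max(0, min(1, device.lensPosition + delta))
    do {
        try device.lockForConfiguration()
        device.setFocusModeLocked(lensPosition: position, completionHandler: nil)
        device.unlockForConfiguration()
    } catch {
        return  // couldn't lock the device for configuration; ignore gesture
    }
    // Reset so each callback delivers an incremental translation.
    gesture.setTranslation(.zero, in: gesture.view)
}
```

The same pattern extends to brightness, temperature, and tint by swapping in the corresponding `AVCaptureDevice` configuration calls.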
If you want to take a deep dive into some of the design successes - and failures! - I encountered with Cinescope, have a look at this article.
🎥 Sometimes You Can't Google It
The latest release includes support for capturing photos and videos with a pre-applied filter. This allows our users to shoot in black-and-white mode while seeing a live preview of exactly what they are capturing. I also added the ability to pause and resume while recording video.
Displaying a live filter preview while capturing stabilized video at high resolutions and framerates is a resource-intensive task. It requires writing a custom rendering engine using MetalKit and CIFilters to directly manipulate the pixel information in the device's sample buffer. Apple's example project is a great starting point for learning about complex video rendering scenarios. Beyond that, information is sparse and often dated. I went through several versions of my rendering engine, relying on feedback from testers, before landing on something that was performant.
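The core of that pipeline can be sketched in a few lines: pull the pixel buffer out of each captured sample buffer, run it through a `CIFilter`, and render the result back with a Metal-backed `CIContext`. This is a simplified illustration under assumed names (`CIPhotoEffectMono` as the black-and-white filter), not the engine described above:

```swift
import AVFoundation
import CoreImage
import Metal

// A Metal-backed Core Image context so filtering stays on the GPU.
let ciContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!)
// Assumed filter choice for the example; any CIFilter works here.
let mono = CIFilter(name: "CIPhotoEffectMono")!

/// Applies the filter in place to the frame's pixel buffer and returns it,
/// ready for both the live preview and the asset writer.
func filtered(_ sampleBuffer: CMSampleBuffer) -> CVPixelBuffer? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return nil
    }
    mono.setValue(CIImage(cvPixelBuffer: pixelBuffer), forKey: kCIInputImageKey)
    guard let output = mono.outputImage else { return nil }
    // Render the filtered image back into the original buffer.
    ciContext.render(output, to: pixelBuffer)
    return pixelBuffer
}
```

The expensive part in practice isn't the filter itself but doing this for every frame at high resolutions and framerates without dropping frames, which is where the custom Metal rendering work comes in.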
I thought something like a pause/resume feature would be relatively easy to implement. 😐🤬☠️ When capturing video you use an AVAssetWriter to append samples to your file. It's not called out in the documentation, but the header file explicitly states that multiple sample-writing sessions aren't supported, so I couldn't create this feature by simply alternating between the startSession and endSession methods. My solution was to extract the sample timing info from the sample buffer and offset it based on the amount of time the user had paused their recording. One gotcha is that the audio and video buffers live on separate dispatch queues, which means their sample buffers get presented to your delegate at different times, so you need to track pause time independently for each stream.
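The timestamp-offset idea can be sketched like this: accumulate the total paused duration, then rewrite each buffer's timing info with `CMSampleBufferCreateCopyWithNewTiming` before appending it. `RecordingClock` is an illustrative name, and per the gotcha above you would keep one instance per stream (audio and video):

```swift
import AVFoundation

/// Sketch of shifting sample-buffer timestamps to implement pause/resume.
/// One instance per stream, since audio and video arrive on separate queues.
final class RecordingClock {
    private var pausedDuration = CMTime.zero
    private var pauseStart: CMTime?

    func pause(at time: CMTime) { pauseStart = time }

    func resume(at time: CMTime) {
        guard let start = pauseStart else { return }
        pausedDuration = CMTimeAdd(pausedDuration, CMTimeSubtract(time, start))
        pauseStart = nil
    }

    /// Returns a copy of `buffer` with its timestamps shifted back by the
    /// total time spent paused, so the written file has no gap.
    func adjusted(_ buffer: CMSampleBuffer) -> CMSampleBuffer? {
        var count: CMItemCount = 0
        CMSampleBufferGetSampleTimingInfoArray(buffer, entryCount: 0,
            arrayToFill: nil, entriesNeededOut: &count)
        var timing = [CMSampleTimingInfo](repeating: CMSampleTimingInfo(),
            count: count)
        CMSampleBufferGetSampleTimingInfoArray(buffer, entryCount: count,
            arrayToFill: &timing, entriesNeededOut: &count)
        for i in 0..<count {
            timing[i].presentationTimeStamp =
                CMTimeSubtract(timing[i].presentationTimeStamp, pausedDuration)
            timing[i].decodeTimeStamp =
                CMTimeSubtract(timing[i].decodeTimeStamp, pausedDuration)
        }
        var result: CMSampleBuffer?
        CMSampleBufferCreateCopyWithNewTiming(allocator: kCFAllocatorDefault,
            sampleBuffer: buffer, sampleTimingEntryCount: count,
            sampleTimingArray: &timing, sampleBufferOut: &result)
        return result
    }
}
```

While paused, incoming buffers are simply dropped; on resume, every subsequent buffer is passed through `adjusted(_:)` before being appended to the writer input.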
If you want to take a deep dive into some of the implementation choices I made - and challenges I encountered - while building Cinescope, have a look at this article.
🎛 Refining the UX
When I released version 2 of the app I unintentionally complicated the UX for changing camera modes, and it has bugged me ever since. I decided to revisit the flow while working on version 2.5 because I also wanted to give users a way to quickly toggle the live filter on and off. I created a new UI component I call a "quick menu." The quick menu lets a user change modes and toggle the currently selected filter directly on the view surface. I'm happy with this solution because not only does it make it easier to change camera functions, it also embodies the spirit of mechanical camera controls...the inspiration for Cinescope's interface.