Recently I sold my first and probably my last physical camera, a Fujifilm X-E2 with a Fujinon XF 35mm F2 lens. I did it because I was afraid to use it: whenever I thought of going out to take some photos, I had to plan for it, be in the right mood, and feel able to manage an “exceptional” event.
In short, the good part of that experience, due largely I think to the physical controls and the lens, didn’t compensate for the discomfort I felt every time I decided to take it with me. Bags, cleaning accessories, extra batteries and SD cards, fear of losing or irreparably ruining something. Not to mention the embarrassment I felt the moment I took a photo. For me, discretion is crucial; I like doing street/candid photography, and carrying such a flashy, bulky object did nothing to help me stay “invisible”.
Moreover, I’m not a professional photographer, and I don’t like the idea of becoming a photography nerd who spends all his money on cameras and accessories. For me, photography is content; the tool must be simple, easy to carry in a pocket, always available, invisible, connected (to my Google Photos library) and, last but not least, it must have all-day battery life.
Actually, my Pixel 3a already meets most of my requirements, and I don’t even want to think about buying an expensive compact camera. In my opinion the Pixel 3a is an affordable, near-perfect AI-driven smart camera for street and travel photography. I can pull it from my pocket, press the power button a couple of times, and be ready to shoot in seconds. I inherited this vision from Craig Mod’s old essays “Goodbye, Cameras” and “Software is the Future of Photography”.
Beyond the unbridgeable differences with an APS-C camera, starting with the quality of the lens and the size of the sensor, the only thing missing from my smartphone camera app is a “manual mode” to set the “exposure triad” (aperture, shutter speed and ISO) and gain control of basic settings, ignoring the software fixes that produce a perfectly exposed photo. I want to preserve shooting speed while making conscious, creative errors, the same kind Clément Chéroux described in his book “Fautographie: petite histoire de l’erreur photographique”.
The Pixel 3a is equipped with a Sony IMX363 CMOS sensor. This sensor measures 5.64 x 4.23mm, which is about 4 times smaller in each linear dimension (roughly 15 times smaller in area) than the Fujifilm’s APS-C X-Trans CMOS II (23.6 x 15.6mm).
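The size gap can be quantified with a quick calculation using the sensor dimensions quoted above:

```python
# Compare the Pixel 3a's Sony IMX363 sensor with the X-E2's APS-C sensor.
# Dimensions in mm, as quoted above: (width, height).
imx363 = (5.64, 4.23)
aps_c = (23.6, 15.6)

linear_ratio = aps_c[0] / imx363[0]  # ratio of sensor widths
area_ratio = (aps_c[0] * aps_c[1]) / (imx363[0] * imx363[1])

print(f"linear: {linear_ratio:.1f}x, area: {area_ratio:.1f}x")
# -> linear: 4.2x, area: 15.4x
```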
The exposure triad settings of my X-E2 were selectable almost entirely with dedicated physical controls. This feature, together with a classical viewfinder, justified owning an object dedicated solely to making photos or, to borrow a phrase from a more recent essay in which Mod describes his experience with the Leica Q, a “single-minded image-capturing device”.
After this long introduction, it should be clear that a Pixel 3a with a manual camera mode would be my ideal camera. Since I haven’t found an app offering a “native experience”, I conceived and designed a device-agnostic manual camera mode myself.
I tried some manual camera apps from the Google Play Store and, setting aside the fact that the Pixel 3a wasn’t fully supported, I found that all of them tried to mimic DSLR menus instead of taking advantage of the screen and the gestures a smartphone offers, which can make the controls invisible and the UI language visually discreet.
Aperture, shutter speed, ISO, white balance and manual focus are mapped to smartphone touch gestures in discrete steps that override the default automatic values until the shooting session is closed.
Switching from one setting to another is initiated by the first mapped gesture, which however doesn’t yet change the setting’s value.
Gesture directions marked with a (*) should be considered relative to the device orientation.
Mockups are a means to demonstrate concepts; settings are approximate and the images are not faithfully tied to them.
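The discrete-step behavior described above can be sketched in a few lines of code. This is purely illustrative: the `ExposureDial` class and the stop values are my own assumptions, not taken from any real app.

```python
# Illustrative sketch of the discrete-step gesture mapping.
# The ExposureDial class and its stop tables are hypothetical.
class ExposureDial:
    """Steps through a fixed list of stops; None means 'auto' is in effect."""

    def __init__(self, stops):
        self.stops = stops
        self.index = None  # None = the default automatic value still applies

    def step(self, direction):
        """direction: +1 for one swipe step up, -1 for one step down."""
        if self.index is None:
            # The first mapped gesture only activates this setting,
            # mirroring the rule that it doesn't change the value yet.
            # (Starting at the middle stop is an arbitrary assumption.)
            self.index = len(self.stops) // 2
        else:
            self.index = max(0, min(len(self.stops) - 1, self.index + direction))
        return self.stops[self.index]

aperture = ExposureDial([1.8, 2.0, 2.8, 4.0, 5.6, 8.0])
aperture.step(+1)         # first gesture: activates the dial, value unchanged
print(aperture.step(+1))  # second gesture: one discrete step up -> 5.6
```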
In the first draft of this story I thought the best way to set the aperture was to simulate closing and opening a circle representing the camera diaphragm by pinching in and out over the image preview. Then I realized that a horizontal swipe was a quicker, one-hand-friendly way to achieve the same result.
Pinch gestures in camera apps are commonly mapped to the zoom functionality. In this concept the smartphone camera lens is treated as a prime lens that doesn’t use the digital zoom capabilities offered by the Pixel 3a.
For the shutter speed, too, I replaced an initial long-press gesture over the screen with a faster vertical swipe.
The idea behind the long-press gesture was the connection between the persistence of the gesture and the length of time the shutter would remain open during the shooting sequence.
Consider the lines as a sort of fuse, activated with a single tap over the scene preview. When the shutter speed lines are exhausted, they release the camera shutter, actually recording the image on the device, as demonstrated in the following sequence.
ISO and white balance
For the ISO and white balance settings I used two touch-sensitive strips, activated by horizontally* dragging a marker along the edges of the screen, aided by subtle, persistent previews of the possible values.
Finally, for manual focus I mapped a vertical* drag gesture over the middle of the screen to set the distance at which to focus the scene. The concept is very reminiscent of the snap focus feature of the Ricoh GR series.
I’d be curious to know what you think; let me know by writing a response on Twitter.
Mockups were made with Figma, and the photos were captured by me with a Pixel 3a, of course 🙂.